00:00:00.001 Started by upstream project "autotest-per-patch" build number 132339 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.093 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.127 Fetching changes from the remote Git repository 00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.165 Using shallow fetch with depth 1 00:00:00.165 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.165 > git --version # timeout=10 00:00:00.191 > git --version # 'git version 2.39.2' 00:00:00.191 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.921 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.934 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.946 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.946 > git config core.sparsecheckout # timeout=10 00:00:04.957 > git read-tree -mu HEAD # timeout=10 00:00:04.975 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.997 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.997 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.082 [Pipeline] Start of Pipeline 00:00:05.095 [Pipeline] library 00:00:05.097 Loading library shm_lib@master 00:00:05.097 Library shm_lib@master is cached. Copying from home. 00:00:05.108 [Pipeline] node 00:00:05.119 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.121 [Pipeline] { 00:00:05.131 [Pipeline] catchError 00:00:05.133 [Pipeline] { 00:00:05.144 [Pipeline] wrap 00:00:05.153 [Pipeline] { 00:00:05.158 [Pipeline] stage 00:00:05.160 [Pipeline] { (Prologue) 00:00:05.350 [Pipeline] sh 00:00:05.640 + logger -p user.info -t JENKINS-CI 00:00:05.660 [Pipeline] echo 00:00:05.662 Node: CYP9 00:00:05.669 [Pipeline] sh 00:00:05.974 [Pipeline] setCustomBuildProperty 00:00:05.984 [Pipeline] echo 00:00:05.985 Cleanup processes 00:00:05.988 [Pipeline] sh 00:00:06.279 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.279 2506628 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.292 [Pipeline] sh 00:00:06.583 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.583 ++ grep -v 'sudo pgrep' 00:00:06.583 ++ awk '{print $1}' 00:00:06.583 + sudo kill -9 00:00:06.583 + true 00:00:06.596 [Pipeline] cleanWs 00:00:06.606 [WS-CLEANUP] Deleting project workspace... 00:00:06.606 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.612 [WS-CLEANUP] done 00:00:06.616 [Pipeline] setCustomBuildProperty 00:00:06.627 [Pipeline] sh 00:00:06.911 + sudo git config --global --replace-all safe.directory '*' 00:00:06.979 [Pipeline] httpRequest 00:00:07.376 [Pipeline] echo 00:00:07.377 Sorcerer 10.211.164.20 is alive 00:00:07.386 [Pipeline] retry 00:00:07.388 [Pipeline] { 00:00:07.401 [Pipeline] httpRequest 00:00:07.406 HttpMethod: GET 00:00:07.406 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.407 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.430 Response Code: HTTP/1.1 200 OK 00:00:07.430 Success: Status code 200 is in the accepted range: 200,404 00:00:07.431 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.332 [Pipeline] } 00:00:22.343 [Pipeline] // retry 00:00:22.348 [Pipeline] sh 00:00:22.635 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.653 [Pipeline] httpRequest 00:00:22.959 [Pipeline] echo 00:00:22.961 Sorcerer 10.211.164.20 is alive 00:00:22.968 [Pipeline] retry 00:00:22.970 [Pipeline] { 00:00:22.981 [Pipeline] httpRequest 00:00:22.985 HttpMethod: GET 00:00:22.986 URL: http://10.211.164.20/packages/spdk_ac26332109db7bbaa74566894d6d0e204caf647d.tar.gz 00:00:22.986 Sending request to url: http://10.211.164.20/packages/spdk_ac26332109db7bbaa74566894d6d0e204caf647d.tar.gz 00:00:22.993 Response Code: HTTP/1.1 200 OK 00:00:22.993 Success: Status code 200 is in the accepted range: 200,404 00:00:22.993 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ac26332109db7bbaa74566894d6d0e204caf647d.tar.gz 00:04:56.307 [Pipeline] } 00:04:56.325 [Pipeline] // retry 00:04:56.332 [Pipeline] sh 00:04:56.628 + tar --no-same-owner -xf spdk_ac26332109db7bbaa74566894d6d0e204caf647d.tar.gz 00:04:59.947 [Pipeline] sh 00:05:00.238 + git -C spdk log --oneline -n5 00:05:00.238 ac2633210 accel: Fix comments for spdk_accel_*_dif_verify_copy() 00:05:00.238 3e396d94d bdev: Clean up duplicated asserts in bdev_io_pull_data() 00:05:00.238 ecdb65a23 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:05:00.238 6745f139b bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:05:00.238 866ba5ffe bdev: Factor out checking bounce buffer necessity into helper function 00:05:00.251 [Pipeline] } 00:05:00.265 [Pipeline] // stage 00:05:00.274 [Pipeline] stage 00:05:00.276 [Pipeline] { (Prepare) 00:05:00.292 [Pipeline] writeFile 00:05:00.308 [Pipeline] sh 00:05:00.598 + logger -p user.info -t JENKINS-CI 00:05:00.613 [Pipeline] sh 00:05:00.902 + logger -p user.info -t JENKINS-CI 00:05:00.916 [Pipeline] sh 00:05:01.276 + cat autorun-spdk.conf 00:05:01.276 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:01.276 SPDK_TEST_NVMF=1 00:05:01.276 SPDK_TEST_NVME_CLI=1 00:05:01.276 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:01.276 SPDK_TEST_NVMF_NICS=e810 00:05:01.276 SPDK_TEST_VFIOUSER=1 00:05:01.276 SPDK_RUN_UBSAN=1 00:05:01.276 NET_TYPE=phy 00:05:01.285 RUN_NIGHTLY=0 00:05:01.289 [Pipeline] readFile 00:05:01.314 [Pipeline] withEnv 00:05:01.317 [Pipeline] { 00:05:01.329 [Pipeline] sh 00:05:01.619 + set -ex 00:05:01.619 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:05:01.619 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:01.619 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:01.619 ++ SPDK_TEST_NVMF=1 00:05:01.619 ++ 
SPDK_TEST_NVME_CLI=1 00:05:01.619 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:01.619 ++ SPDK_TEST_NVMF_NICS=e810 00:05:01.619 ++ SPDK_TEST_VFIOUSER=1 00:05:01.619 ++ SPDK_RUN_UBSAN=1 00:05:01.619 ++ NET_TYPE=phy 00:05:01.619 ++ RUN_NIGHTLY=0 00:05:01.619 + case $SPDK_TEST_NVMF_NICS in 00:05:01.619 + DRIVERS=ice 00:05:01.619 + [[ tcp == \r\d\m\a ]] 00:05:01.619 + [[ -n ice ]] 00:05:01.619 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:05:01.619 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:05:01.619 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:05:01.619 rmmod: ERROR: Module irdma is not currently loaded 00:05:01.619 rmmod: ERROR: Module i40iw is not currently loaded 00:05:01.619 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:05:01.619 + true 00:05:01.619 + for D in $DRIVERS 00:05:01.619 + sudo modprobe ice 00:05:01.619 + exit 0 00:05:01.629 [Pipeline] } 00:05:01.640 [Pipeline] // withEnv 00:05:01.645 [Pipeline] } 00:05:01.657 [Pipeline] // stage 00:05:01.666 [Pipeline] catchError 00:05:01.668 [Pipeline] { 00:05:01.681 [Pipeline] timeout 00:05:01.681 Timeout set to expire in 1 hr 0 min 00:05:01.683 [Pipeline] { 00:05:01.694 [Pipeline] stage 00:05:01.696 [Pipeline] { (Tests) 00:05:01.709 [Pipeline] sh 00:05:01.998 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:01.998 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:01.998 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:01.998 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:05:01.998 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:01.998 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:01.998 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:05:01.998 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:01.998 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:01.998 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:01.998 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:05:01.998 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:01.998 + source /etc/os-release 00:05:01.998 ++ NAME='Fedora Linux' 00:05:01.998 ++ VERSION='39 (Cloud Edition)' 00:05:01.998 ++ ID=fedora 00:05:01.998 ++ VERSION_ID=39 00:05:01.998 ++ VERSION_CODENAME= 00:05:01.998 ++ PLATFORM_ID=platform:f39 00:05:01.998 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:01.998 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:01.998 ++ LOGO=fedora-logo-icon 00:05:01.998 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:01.998 ++ HOME_URL=https://fedoraproject.org/ 00:05:01.998 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:01.998 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:01.998 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:01.998 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:01.998 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:01.998 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:01.998 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:01.998 ++ SUPPORT_END=2024-11-12 00:05:01.998 ++ VARIANT='Cloud Edition' 00:05:01.998 ++ VARIANT_ID=cloud 00:05:01.998 + uname -a 00:05:01.998 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:01.998 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:05.301 Hugepages 00:05:05.301 node hugesize free / total 00:05:05.301 node0 1048576kB 0 / 0 00:05:05.301 node0 2048kB 0 / 0 00:05:05.301 node1 1048576kB 0 / 0 00:05:05.301 node1 2048kB 0 / 0 00:05:05.301 00:05:05.301 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.301 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:05.301 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:05.301 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:05.301 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:05.301 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:05.301 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:05.301 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:05.301 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:05.301 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:05.301 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:05.301 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:05.301 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:05.301 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:05.301 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:05.301 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:05.301 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:05.301 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:05.301 + rm -f /tmp/spdk-ld-path 00:05:05.301 + source autorun-spdk.conf 00:05:05.301 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:05.301 ++ SPDK_TEST_NVMF=1 00:05:05.301 ++ SPDK_TEST_NVME_CLI=1 00:05:05.301 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:05.301 ++ SPDK_TEST_NVMF_NICS=e810 00:05:05.301 ++ SPDK_TEST_VFIOUSER=1 00:05:05.301 ++ SPDK_RUN_UBSAN=1 00:05:05.301 ++ NET_TYPE=phy 00:05:05.301 ++ RUN_NIGHTLY=0 00:05:05.301 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:05.301 + [[ -n '' ]] 00:05:05.301 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.301 + for M in /var/spdk/build-*-manifest.txt 00:05:05.301 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:05:05.301 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:05.301 + for M in /var/spdk/build-*-manifest.txt 00:05:05.301 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:05.301 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:05.301 + for M in /var/spdk/build-*-manifest.txt 00:05:05.301 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:05.301 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:05.301 ++ uname 00:05:05.301 + [[ Linux == \L\i\n\u\x ]] 00:05:05.301 + sudo dmesg -T 00:05:05.301 + sudo dmesg --clear 00:05:05.301 + dmesg_pid=2508768 00:05:05.301 + [[ Fedora Linux == FreeBSD ]] 00:05:05.301 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:05.301 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:05.301 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:05.301 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:05:05.301 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:05:05.301 + [[ -x /usr/src/fio-static/fio ]] 00:05:05.301 + export FIO_BIN=/usr/src/fio-static/fio 00:05:05.301 + FIO_BIN=/usr/src/fio-static/fio 00:05:05.301 + sudo dmesg -Tw 00:05:05.301 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:05.301 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:05.301 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:05.301 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:05.301 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:05.301 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:05.301 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:05.301 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:05.301 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:05.301 06:15:25 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:05.301 06:15:25 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:05:05.301 06:15:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:05:05.301 06:15:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:05.301 06:15:25 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:05.563 06:15:25 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:05.563 06:15:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.563 06:15:25 -- 
scripts/common.sh@15 -- $ shopt -s extglob 00:05:05.563 06:15:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:05.563 06:15:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.563 06:15:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.563 06:15:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.563 06:15:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.563 06:15:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.563 06:15:25 -- paths/export.sh@5 -- $ export PATH 00:05:05.563 06:15:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.563 06:15:25 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:05.563 06:15:25 -- common/autobuild_common.sh@486 -- $ date +%s 00:05:05.563 06:15:25 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732079725.XXXXXX 00:05:05.563 06:15:25 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732079725.cK68TH 00:05:05.563 06:15:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:05:05.563 06:15:25 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:05:05.563 06:15:25 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:05:05.563 06:15:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:05:05.563 06:15:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:05:05.563 06:15:25 -- 
common/autobuild_common.sh@502 -- $ get_config_params 00:05:05.563 06:15:25 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:05:05.563 06:15:25 -- common/autotest_common.sh@10 -- $ set +x 00:05:05.563 06:15:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:05:05.563 06:15:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:05:05.563 06:15:25 -- pm/common@17 -- $ local monitor 00:05:05.563 06:15:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.563 06:15:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.563 06:15:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.563 06:15:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.563 06:15:25 -- pm/common@21 -- $ date +%s 00:05:05.563 06:15:25 -- pm/common@21 -- $ date +%s 00:05:05.563 06:15:25 -- pm/common@25 -- $ sleep 1 00:05:05.563 06:15:25 -- pm/common@21 -- $ date +%s 00:05:05.563 06:15:25 -- pm/common@21 -- $ date +%s 00:05:05.563 06:15:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079725 00:05:05.563 06:15:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079725 00:05:05.563 06:15:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079725 00:05:05.563 06:15:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079725 00:05:05.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079725_collect-cpu-load.pm.log 00:05:05.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079725_collect-vmstat.pm.log 00:05:05.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079725_collect-cpu-temp.pm.log 00:05:05.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079725_collect-bmc-pm.bmc.pm.log 00:05:06.506 06:15:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:05:06.506 06:15:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:06.506 06:15:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:06.506 06:15:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.506 06:15:26 -- spdk/autobuild.sh@16 -- $ date -u 00:05:06.506 Wed Nov 20 05:15:26 AM UTC 2024 00:05:06.506 06:15:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:06.506 v25.01-pre-197-gac2633210 00:05:06.506 06:15:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:06.506 06:15:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:06.506 06:15:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:06.506 06:15:26 -- common/autotest_common.sh@1103 -- $ 
'[' 3 -le 1 ']' 00:05:06.506 06:15:26 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:06.506 06:15:26 -- common/autotest_common.sh@10 -- $ set +x 00:05:06.506 ************************************ 00:05:06.506 START TEST ubsan 00:05:06.506 ************************************ 00:05:06.506 06:15:26 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:05:06.506 using ubsan 00:05:06.506 00:05:06.506 real 0m0.001s 00:05:06.506 user 0m0.000s 00:05:06.506 sys 0m0.000s 00:05:06.506 06:15:26 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:06.506 06:15:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:06.506 ************************************ 00:05:06.506 END TEST ubsan 00:05:06.506 ************************************ 00:05:06.767 06:15:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:06.767 06:15:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:06.767 06:15:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:06.767 06:15:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:06.767 06:15:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:06.767 06:15:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:06.767 06:15:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:06.767 06:15:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:06.767 06:15:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:05:06.767 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:06.767 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:07.339 Using 'verbs' RDMA provider 00:05:22.825 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:05:37.741 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:05:37.741 Creating mk/config.mk...done. 00:05:37.741 Creating mk/cc.flags.mk...done. 00:05:37.741 Type 'make' to build. 00:05:37.741 06:15:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:05:37.741 06:15:56 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:37.741 06:15:56 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:37.741 06:15:56 -- common/autotest_common.sh@10 -- $ set +x 00:05:37.741 ************************************ 00:05:37.741 START TEST make 00:05:37.741 ************************************ 00:05:37.741 06:15:56 make -- common/autotest_common.sh@1127 -- $ make -j144 00:05:37.741 make[1]: Nothing to be done for 'all'. 
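The configure invocation above enables debug, werror, ubsan, coverage, ublk, and vfio-user support before autobuild.sh hands off to make. A minimal sketch of reproducing that flow outside Jenkins follows; the GitHub mirror URL and job count are assumptions not taken from this log, and CI-specific flags such as --with-fio=/usr/src/fio are dropped:

    # Sketch: local equivalent of the configure/make steps seen above.
    # Clone URL and -j value are assumptions; flags are copied from the log.
    git clone https://github.com/spdk/spdk.git
    cd spdk
    git submodule update --init    # pulls dpdk, libvfio-user, etc.
    ./configure --enable-debug --enable-werror --enable-ubsan \
                --with-vfio-user --with-shared
    make -j"$(nproc)"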
00:05:37.741 The Meson build system 00:05:37.741 Version: 1.5.0 00:05:37.741 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:05:37.741 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:37.741 Build type: native build 00:05:37.741 Project name: libvfio-user 00:05:37.741 Project version: 0.0.1 00:05:37.741 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:37.741 C linker for the host machine: cc ld.bfd 2.40-14 00:05:37.741 Host machine cpu family: x86_64 00:05:37.741 Host machine cpu: x86_64 00:05:37.741 Run-time dependency threads found: YES 00:05:37.741 Library dl found: YES 00:05:37.741 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:37.741 Run-time dependency json-c found: YES 0.17 00:05:37.741 Run-time dependency cmocka found: YES 1.1.7 00:05:37.741 Program pytest-3 found: NO 00:05:37.741 Program flake8 found: NO 00:05:37.741 Program misspell-fixer found: NO 00:05:37.741 Program restructuredtext-lint found: NO 00:05:37.741 Program valgrind found: YES (/usr/bin/valgrind) 00:05:37.741 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:37.741 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:37.741 Compiler for C supports arguments -Wwrite-strings: YES 00:05:37.741 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:37.741 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:05:37.741 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:05:37.741 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:37.741 Build targets in project: 8 00:05:37.741 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:37.741 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:37.741 00:05:37.741 libvfio-user 0.0.1 00:05:37.741 00:05:37.741 User defined options 00:05:37.741 buildtype : debug 00:05:37.741 default_library: shared 00:05:37.741 libdir : /usr/local/lib 00:05:37.741 00:05:37.741 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:38.312 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:38.312 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:05:38.312 [2/37] Compiling C object samples/null.p/null.c.o 00:05:38.312 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:05:38.312 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:38.312 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:38.312 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:38.312 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:05:38.312 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:38.312 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:38.312 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:38.312 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:38.312 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:05:38.312 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:05:38.312 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:38.312 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:05:38.312 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:38.312 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:38.312 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:05:38.312 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:05:38.312 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:05:38.312 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:38.312 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:38.312 [23/37] Compiling C object samples/server.p/server.c.o 00:05:38.312 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:38.312 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:38.312 [26/37] Compiling C object samples/client.p/client.c.o 00:05:38.574 [27/37] Linking target samples/client 00:05:38.574 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:05:38.574 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:38.574 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:05:38.574 [31/37] Linking target test/unit_tests 00:05:38.574 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:38.835 [33/37] Linking target samples/null 00:05:38.835 [34/37] Linking target samples/server 00:05:38.835 [35/37] Linking target samples/gpio-pci-idio-16 00:05:38.835 [36/37] Linking target samples/lspci 00:05:38.835 [37/37] Linking target samples/shadow_ioeventfd_server 00:05:38.835 INFO: autodetecting backend as ninja 00:05:38.835 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
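At this point the libvfio-user submodule has been configured and built as a shared debug library (37 ninja targets); the next log line stages it under spdk/build/libvfio-user via DESTDIR. A stand-alone sketch of the same sequence, run from a libvfio-user checkout, with the build directory and staging path chosen to mirror the log:

    # Sketch: the meson setup / ninja / staged install shown above.
    # build-debug and the DESTDIR path are illustrative.
    meson setup build-debug --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug
    DESTDIR=/tmp/libvfio-user-root meson install --quiet -C build-debug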
00:05:38.835 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:39.095 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:39.095 ninja: no work to do. 00:05:45.687 The Meson build system 00:05:45.687 Version: 1.5.0 00:05:45.687 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:45.687 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:45.687 Build type: native build 00:05:45.687 Program cat found: YES (/usr/bin/cat) 00:05:45.687 Project name: DPDK 00:05:45.687 Project version: 24.03.0 00:05:45.687 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:45.687 C linker for the host machine: cc ld.bfd 2.40-14 00:05:45.687 Host machine cpu family: x86_64 00:05:45.687 Host machine cpu: x86_64 00:05:45.687 Message: ## Building in Developer Mode ## 00:05:45.687 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:45.687 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:45.687 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:45.687 Program python3 found: YES (/usr/bin/python3) 00:05:45.687 Program cat found: YES (/usr/bin/cat) 00:05:45.687 Compiler for C supports arguments -march=native: YES 00:05:45.687 Checking for size of "void *" : 8 00:05:45.687 Checking for size of "void *" : 8 (cached) 00:05:45.687 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:45.687 Library m found: YES 00:05:45.687 Library numa found: YES 00:05:45.687 Has header "numaif.h" : YES 00:05:45.687 Library fdt found: NO 00:05:45.687 Library execinfo found: NO 00:05:45.687 Has header "execinfo.h" : YES 00:05:45.687 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:45.687 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:45.687 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:45.687 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:45.687 Run-time dependency openssl found: YES 3.1.1 00:05:45.687 Run-time dependency libpcap found: YES 1.10.4 00:05:45.688 Has header "pcap.h" with dependency libpcap: YES 00:05:45.688 Compiler for C supports arguments -Wcast-qual: YES 00:05:45.688 Compiler for C supports arguments -Wdeprecated: YES 00:05:45.688 Compiler for C supports arguments -Wformat: YES 00:05:45.688 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:45.688 Compiler for C supports arguments -Wformat-security: NO 00:05:45.688 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:45.688 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:45.688 Compiler for C supports arguments -Wnested-externs: YES 00:05:45.688 Compiler for C supports arguments -Wold-style-definition: YES 00:05:45.688 Compiler for C supports arguments -Wpointer-arith: YES 00:05:45.688 Compiler for C supports arguments -Wsign-compare: YES 00:05:45.688 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:45.688 Compiler for C supports arguments -Wundef: YES 00:05:45.688 Compiler for C supports arguments -Wwrite-strings: YES 00:05:45.688 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:45.688 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:05:45.688 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:45.688 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:45.688 Program objdump found: YES (/usr/bin/objdump) 00:05:45.688 Compiler for C supports arguments -mavx512f: YES 00:05:45.688 Checking if "AVX512 checking" compiles: YES 00:05:45.688 Fetching value of define "__SSE4_2__" : 1 00:05:45.688 Fetching value of define "__AES__" : 1 00:05:45.688 Fetching value of define "__AVX__" : 1 00:05:45.688 Fetching value of define "__AVX2__" : 1 00:05:45.688 Fetching value of define "__AVX512BW__" : 1 00:05:45.688 Fetching value of define "__AVX512CD__" : 1 00:05:45.688 Fetching value of define "__AVX512DQ__" : 1 00:05:45.688 Fetching value of define "__AVX512F__" : 1 00:05:45.688 Fetching value of define "__AVX512VL__" : 1 00:05:45.688 Fetching value of define "__PCLMUL__" : 1 00:05:45.688 Fetching value of define "__RDRND__" : 1 00:05:45.688 Fetching value of define "__RDSEED__" : 1 00:05:45.688 Fetching value of define "__VPCLMULQDQ__" : 1 00:05:45.688 Fetching value of define "__znver1__" : (undefined) 00:05:45.688 Fetching value of define "__znver2__" : (undefined) 00:05:45.688 Fetching value of define "__znver3__" : (undefined) 00:05:45.688 Fetching value of define "__znver4__" : (undefined) 00:05:45.688 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:45.688 Message: lib/log: Defining dependency "log" 00:05:45.688 Message: lib/kvargs: Defining dependency "kvargs" 00:05:45.688 Message: lib/telemetry: Defining dependency "telemetry" 00:05:45.688 Checking for function "getentropy" : NO 00:05:45.688 Message: lib/eal: Defining dependency "eal" 00:05:45.688 Message: lib/ring: Defining dependency "ring" 00:05:45.688 Message: lib/rcu: Defining dependency "rcu" 00:05:45.688 Message: lib/mempool: Defining dependency "mempool" 00:05:45.688 Message: lib/mbuf: Defining dependency "mbuf" 00:05:45.688 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:45.688 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:45.688 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:45.688 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:45.688 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:45.688 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:05:45.688 Compiler for C supports arguments -mpclmul: YES 00:05:45.688 Compiler for C supports arguments -maes: YES 00:05:45.688 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:45.688 Compiler for C supports arguments -mavx512bw: YES 00:05:45.688 Compiler for C supports arguments -mavx512dq: YES 00:05:45.688 Compiler for C supports arguments -mavx512vl: YES 00:05:45.688 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:45.688 Compiler for C supports arguments -mavx2: YES 00:05:45.688 Compiler for C supports arguments -mavx: YES 00:05:45.688 Message: lib/net: Defining dependency "net" 00:05:45.688 Message: lib/meter: Defining dependency "meter" 00:05:45.688 Message: lib/ethdev: Defining dependency "ethdev" 00:05:45.688 Message: lib/pci: Defining dependency "pci" 00:05:45.688 Message: lib/cmdline: Defining dependency "cmdline" 00:05:45.688 Message: lib/hash: Defining dependency "hash" 00:05:45.688 Message: lib/timer: Defining dependency "timer" 00:05:45.688 Message: lib/compressdev: Defining dependency "compressdev" 00:05:45.688 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:45.688 Message: lib/dmadev: Defining dependency "dmadev" 
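The "Fetching value of define" probes above are how this DPDK meson setup detects which SIMD features -march=native exposes, which later shows up as SIMD-specific objects such as net_crc_sse.c.o and net_crc_avx512.c.o. A sketch of approximating the same checks by hand on the build host, assuming Linux /proc/cpuinfo and gcc are available:

    # Sketch: inspect the CPU/compiler features meson is probing above
    grep -owE 'avx2|avx512f|avx512bw|avx512dq|avx512vl|vpclmulqdq' /proc/cpuinfo | sort -u
    gcc -march=native -dM -E - </dev/null | grep -E '__AVX512(F|BW|DQ|VL)__|__VPCLMULQDQ__'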
00:05:45.688 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:45.688 Message: lib/power: Defining dependency "power" 00:05:45.688 Message: lib/reorder: Defining dependency "reorder" 00:05:45.688 Message: lib/security: Defining dependency "security" 00:05:45.688 Has header "linux/userfaultfd.h" : YES 00:05:45.688 Has header "linux/vduse.h" : YES 00:05:45.688 Message: lib/vhost: Defining dependency "vhost" 00:05:45.688 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:45.688 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:45.688 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:45.688 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:45.688 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:45.688 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:45.688 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:45.688 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:45.688 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:45.688 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:45.688 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:45.688 Configuring doxy-api-html.conf using configuration 00:05:45.688 Configuring doxy-api-man.conf using configuration 00:05:45.688 Program mandb found: YES (/usr/bin/mandb) 00:05:45.688 Program sphinx-build found: NO 00:05:45.688 Configuring rte_build_config.h using configuration 00:05:45.688 Message: 00:05:45.688 ================= 00:05:45.688 Applications Enabled 00:05:45.688 ================= 00:05:45.688 00:05:45.688 apps: 00:05:45.688 00:05:45.688 00:05:45.688 Message: 00:05:45.688 ================= 00:05:45.688 Libraries Enabled 00:05:45.688 ================= 00:05:45.688 00:05:45.688 libs: 00:05:45.688 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:45.688 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:45.688 cryptodev, dmadev, power, reorder, security, vhost, 00:05:45.688 00:05:45.688 Message: 00:05:45.688 =============== 00:05:45.688 Drivers Enabled 00:05:45.688 =============== 00:05:45.688 00:05:45.688 common: 00:05:45.688 00:05:45.688 bus: 00:05:45.688 pci, vdev, 00:05:45.688 mempool: 00:05:45.688 ring, 00:05:45.688 dma: 00:05:45.688 00:05:45.688 net: 00:05:45.688 00:05:45.688 crypto: 00:05:45.688 00:05:45.688 compress: 00:05:45.688 00:05:45.688 vdpa: 00:05:45.688 00:05:45.688 00:05:45.688 Message: 00:05:45.688 ================= 00:05:45.688 Content Skipped 00:05:45.688 ================= 00:05:45.688 00:05:45.688 apps: 00:05:45.688 dumpcap: explicitly disabled via build config 00:05:45.688 graph: explicitly disabled via build config 00:05:45.688 pdump: explicitly disabled via build config 00:05:45.688 proc-info: explicitly disabled via build config 00:05:45.688 test-acl: explicitly disabled via build config 00:05:45.688 test-bbdev: explicitly disabled via build config 00:05:45.688 test-cmdline: explicitly disabled via build config 00:05:45.688 test-compress-perf: explicitly disabled via build config 00:05:45.688 test-crypto-perf: explicitly disabled via build config 00:05:45.688 test-dma-perf: explicitly disabled via build config 00:05:45.688 test-eventdev: explicitly disabled via build config 00:05:45.688 test-fib: explicitly disabled via build config 00:05:45.688 test-flow-perf: explicitly disabled via build config 00:05:45.688 test-gpudev: explicitly disabled 
via build config 00:05:45.688 test-mldev: explicitly disabled via build config 00:05:45.688 test-pipeline: explicitly disabled via build config 00:05:45.688 test-pmd: explicitly disabled via build config 00:05:45.688 test-regex: explicitly disabled via build config 00:05:45.688 test-sad: explicitly disabled via build config 00:05:45.688 test-security-perf: explicitly disabled via build config 00:05:45.688 00:05:45.688 libs: 00:05:45.688 argparse: explicitly disabled via build config 00:05:45.688 metrics: explicitly disabled via build config 00:05:45.688 acl: explicitly disabled via build config 00:05:45.688 bbdev: explicitly disabled via build config 00:05:45.688 bitratestats: explicitly disabled via build config 00:05:45.688 bpf: explicitly disabled via build config 00:05:45.688 cfgfile: explicitly disabled via build config 00:05:45.688 distributor: explicitly disabled via build config 00:05:45.688 efd: explicitly disabled via build config 00:05:45.688 eventdev: explicitly disabled via build config 00:05:45.688 dispatcher: explicitly disabled via build config 00:05:45.688 gpudev: explicitly disabled via build config 00:05:45.688 gro: explicitly disabled via build config 00:05:45.688 gso: explicitly disabled via build config 00:05:45.688 ip_frag: explicitly disabled via build config 00:05:45.688 jobstats: explicitly disabled via build config 00:05:45.688 latencystats: explicitly disabled via build config 00:05:45.688 lpm: explicitly disabled via build config 00:05:45.688 member: explicitly disabled via build config 00:05:45.688 pcapng: explicitly disabled via build config 00:05:45.688 rawdev: explicitly disabled via build config 00:05:45.688 regexdev: explicitly disabled via build config 00:05:45.688 mldev: explicitly disabled via build config 00:05:45.688 rib: explicitly disabled via build config 00:05:45.688 sched: explicitly disabled via build config 00:05:45.688 stack: explicitly disabled via build config 00:05:45.688 ipsec: explicitly disabled via build config 00:05:45.688 pdcp: explicitly disabled via build config 00:05:45.688 fib: explicitly disabled via build config 00:05:45.689 port: explicitly disabled via build config 00:05:45.689 pdump: explicitly disabled via build config 00:05:45.689 table: explicitly disabled via build config 00:05:45.689 pipeline: explicitly disabled via build config 00:05:45.689 graph: explicitly disabled via build config 00:05:45.689 node: explicitly disabled via build config 00:05:45.689 00:05:45.689 drivers: 00:05:45.689 common/cpt: not in enabled drivers build config 00:05:45.689 common/dpaax: not in enabled drivers build config 00:05:45.689 common/iavf: not in enabled drivers build config 00:05:45.689 common/idpf: not in enabled drivers build config 00:05:45.689 common/ionic: not in enabled drivers build config 00:05:45.689 common/mvep: not in enabled drivers build config 00:05:45.689 common/octeontx: not in enabled drivers build config 00:05:45.689 bus/auxiliary: not in enabled drivers build config 00:05:45.689 bus/cdx: not in enabled drivers build config 00:05:45.689 bus/dpaa: not in enabled drivers build config 00:05:45.689 bus/fslmc: not in enabled drivers build config 00:05:45.689 bus/ifpga: not in enabled drivers build config 00:05:45.689 bus/platform: not in enabled drivers build config 00:05:45.689 bus/uacce: not in enabled drivers build config 00:05:45.689 bus/vmbus: not in enabled drivers build config 00:05:45.689 common/cnxk: not in enabled drivers build config 00:05:45.689 common/mlx5: not in enabled drivers build config 00:05:45.689 
common/nfp: not in enabled drivers build config 00:05:45.689 common/nitrox: not in enabled drivers build config 00:05:45.689 common/qat: not in enabled drivers build config 00:05:45.689 common/sfc_efx: not in enabled drivers build config 00:05:45.689 mempool/bucket: not in enabled drivers build config 00:05:45.689 mempool/cnxk: not in enabled drivers build config 00:05:45.689 mempool/dpaa: not in enabled drivers build config 00:05:45.689 mempool/dpaa2: not in enabled drivers build config 00:05:45.689 mempool/octeontx: not in enabled drivers build config 00:05:45.689 mempool/stack: not in enabled drivers build config 00:05:45.689 dma/cnxk: not in enabled drivers build config 00:05:45.689 dma/dpaa: not in enabled drivers build config 00:05:45.689 dma/dpaa2: not in enabled drivers build config 00:05:45.689 dma/hisilicon: not in enabled drivers build config 00:05:45.689 dma/idxd: not in enabled drivers build config 00:05:45.689 dma/ioat: not in enabled drivers build config 00:05:45.689 dma/skeleton: not in enabled drivers build config 00:05:45.689 net/af_packet: not in enabled drivers build config 00:05:45.689 net/af_xdp: not in enabled drivers build config 00:05:45.689 net/ark: not in enabled drivers build config 00:05:45.689 net/atlantic: not in enabled drivers build config 00:05:45.689 net/avp: not in enabled drivers build config 00:05:45.689 net/axgbe: not in enabled drivers build config 00:05:45.689 net/bnx2x: not in enabled drivers build config 00:05:45.689 net/bnxt: not in enabled drivers build config 00:05:45.689 net/bonding: not in enabled drivers build config 00:05:45.689 net/cnxk: not in enabled drivers build config 00:05:45.689 net/cpfl: not in enabled drivers build config 00:05:45.689 net/cxgbe: not in enabled drivers build config 00:05:45.689 net/dpaa: not in enabled drivers build config 00:05:45.689 net/dpaa2: not in enabled drivers build config 00:05:45.689 net/e1000: not in enabled drivers build config 00:05:45.689 net/ena: not in enabled drivers build config 00:05:45.689 net/enetc: not in enabled drivers build config 00:05:45.689 net/enetfec: not in enabled drivers build config 00:05:45.689 net/enic: not in enabled drivers build config 00:05:45.689 net/failsafe: not in enabled drivers build config 00:05:45.689 net/fm10k: not in enabled drivers build config 00:05:45.689 net/gve: not in enabled drivers build config 00:05:45.689 net/hinic: not in enabled drivers build config 00:05:45.689 net/hns3: not in enabled drivers build config 00:05:45.689 net/i40e: not in enabled drivers build config 00:05:45.689 net/iavf: not in enabled drivers build config 00:05:45.689 net/ice: not in enabled drivers build config 00:05:45.689 net/idpf: not in enabled drivers build config 00:05:45.689 net/igc: not in enabled drivers build config 00:05:45.689 net/ionic: not in enabled drivers build config 00:05:45.689 net/ipn3ke: not in enabled drivers build config 00:05:45.689 net/ixgbe: not in enabled drivers build config 00:05:45.689 net/mana: not in enabled drivers build config 00:05:45.689 net/memif: not in enabled drivers build config 00:05:45.689 net/mlx4: not in enabled drivers build config 00:05:45.689 net/mlx5: not in enabled drivers build config 00:05:45.689 net/mvneta: not in enabled drivers build config 00:05:45.689 net/mvpp2: not in enabled drivers build config 00:05:45.689 net/netvsc: not in enabled drivers build config 00:05:45.689 net/nfb: not in enabled drivers build config 00:05:45.689 net/nfp: not in enabled drivers build config 00:05:45.689 net/ngbe: not in enabled drivers build 
config 00:05:45.689 net/null: not in enabled drivers build config 00:05:45.689 net/octeontx: not in enabled drivers build config 00:05:45.689 net/octeon_ep: not in enabled drivers build config 00:05:45.689 net/pcap: not in enabled drivers build config 00:05:45.689 net/pfe: not in enabled drivers build config 00:05:45.689 net/qede: not in enabled drivers build config 00:05:45.689 net/ring: not in enabled drivers build config 00:05:45.689 net/sfc: not in enabled drivers build config 00:05:45.689 net/softnic: not in enabled drivers build config 00:05:45.689 net/tap: not in enabled drivers build config 00:05:45.689 net/thunderx: not in enabled drivers build config 00:05:45.689 net/txgbe: not in enabled drivers build config 00:05:45.689 net/vdev_netvsc: not in enabled drivers build config 00:05:45.689 net/vhost: not in enabled drivers build config 00:05:45.689 net/virtio: not in enabled drivers build config 00:05:45.689 net/vmxnet3: not in enabled drivers build config 00:05:45.689 raw/*: missing internal dependency, "rawdev" 00:05:45.689 crypto/armv8: not in enabled drivers build config 00:05:45.689 crypto/bcmfs: not in enabled drivers build config 00:05:45.689 crypto/caam_jr: not in enabled drivers build config 00:05:45.689 crypto/ccp: not in enabled drivers build config 00:05:45.689 crypto/cnxk: not in enabled drivers build config 00:05:45.689 crypto/dpaa_sec: not in enabled drivers build config 00:05:45.689 crypto/dpaa2_sec: not in enabled drivers build config 00:05:45.689 crypto/ipsec_mb: not in enabled drivers build config 00:05:45.689 crypto/mlx5: not in enabled drivers build config 00:05:45.689 crypto/mvsam: not in enabled drivers build config 00:05:45.689 crypto/nitrox: not in enabled drivers build config 00:05:45.689 crypto/null: not in enabled drivers build config 00:05:45.689 crypto/octeontx: not in enabled drivers build config 00:05:45.689 crypto/openssl: not in enabled drivers build config 00:05:45.689 crypto/scheduler: not in enabled drivers build config 00:05:45.689 crypto/uadk: not in enabled drivers build config 00:05:45.689 crypto/virtio: not in enabled drivers build config 00:05:45.689 compress/isal: not in enabled drivers build config 00:05:45.689 compress/mlx5: not in enabled drivers build config 00:05:45.689 compress/nitrox: not in enabled drivers build config 00:05:45.689 compress/octeontx: not in enabled drivers build config 00:05:45.689 compress/zlib: not in enabled drivers build config 00:05:45.689 regex/*: missing internal dependency, "regexdev" 00:05:45.689 ml/*: missing internal dependency, "mldev" 00:05:45.689 vdpa/ifc: not in enabled drivers build config 00:05:45.689 vdpa/mlx5: not in enabled drivers build config 00:05:45.689 vdpa/nfp: not in enabled drivers build config 00:05:45.689 vdpa/sfc: not in enabled drivers build config 00:05:45.689 event/*: missing internal dependency, "eventdev" 00:05:45.689 baseband/*: missing internal dependency, "bbdev" 00:05:45.689 gpu/*: missing internal dependency, "gpudev" 00:05:45.689 00:05:45.689 00:05:45.689 Build targets in project: 84 00:05:45.689 00:05:45.689 DPDK 24.03.0 00:05:45.689 00:05:45.689 User defined options 00:05:45.689 buildtype : debug 00:05:45.689 default_library : shared 00:05:45.689 libdir : lib 00:05:45.689 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:45.689 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:45.689 c_link_args : 00:05:45.689 cpu_instruction_set: native 00:05:45.689 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:05:45.689 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:05:45.689 enable_docs : false 00:05:45.689 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:45.689 enable_kmods : false 00:05:45.689 max_lcores : 128 00:05:45.689 tests : false 00:05:45.689 00:05:45.689 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:45.689 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:45.689 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:45.689 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:45.689 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:45.689 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:45.689 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:45.689 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:45.689 [7/267] Linking static target lib/librte_kvargs.a 00:05:45.689 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:45.689 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:45.689 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:45.689 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:45.689 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:45.689 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:45.689 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:45.689 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:45.689 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:45.951 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:45.951 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:45.951 [19/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:45.951 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:45.951 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:45.951 [22/267] Linking static target lib/librte_log.a 00:05:45.951 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:45.951 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:45.951 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:45.951 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:45.951 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:45.951 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:45.951 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:45.951 [30/267] Linking static target lib/librte_pci.a 00:05:45.951 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:45.951 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:45.951 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:45.951 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:45.951 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:45.951 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:45.951 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:45.951 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:46.210 [39/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:46.210 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.210 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.210 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:46.210 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:46.210 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:46.210 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:46.210 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:46.210 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:46.210 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:46.210 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:46.210 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:46.210 [51/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:46.210 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:46.210 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:46.210 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:46.210 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:46.210 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:46.210 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:46.210 [58/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:46.210 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:46.210 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:46.210 [61/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:46.210 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:46.210 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:46.210 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:46.210 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:46.210 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:46.210 [67/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:46.210 [68/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:46.210 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:46.210 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 
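The 267-target build running here comes from the trimmed-down DPDK configuration listed under "User defined options" above: only the pci/vdev buses and the ring mempool driver are enabled, apps, tests, and docs are off, and -Werror is injected through c_args. A hand-run sketch of that setup from the dpdk source directory, with the long disable_apps/disable_libs lists omitted and c_args abbreviated:

    # Sketch: reduced DPDK meson configuration mirroring the log above
    meson setup build-tmp --buildtype=debug -Ddefault_library=shared \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dtests=false -Denable_docs=false -Dmax_lcores=128 \
        -Dc_args='-fPIC -Werror'
    ninja -C build-tmp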
00:05:46.210 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:46.210 [72/267] Linking static target lib/librte_timer.a 00:05:46.210 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:46.210 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:46.210 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:46.210 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:46.210 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:46.210 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:46.210 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:46.210 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:46.210 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:46.210 [82/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:46.210 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:46.210 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:46.210 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:46.210 [86/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:46.210 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:46.210 [88/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:46.210 [89/267] Linking static target lib/librte_meter.a 00:05:46.210 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:46.210 [91/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:46.210 [92/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:46.210 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:46.210 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:46.210 [95/267] Linking static target lib/librte_telemetry.a 00:05:46.210 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:46.210 [97/267] Linking static target lib/librte_ring.a 00:05:46.210 [98/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:46.210 [99/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:46.210 [100/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:46.210 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:46.472 [102/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:46.472 [103/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:46.472 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:46.472 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:46.472 [106/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:46.472 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:46.472 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:46.472 [109/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:46.472 [110/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:46.472 [111/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:46.472 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:46.472 [113/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:46.472 [114/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:46.472 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:46.472 [116/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:46.472 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:46.472 [118/267] Linking static target lib/librte_cmdline.a 00:05:46.472 [119/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:46.472 [120/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:46.472 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:46.472 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:46.472 [123/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:46.472 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:46.472 [125/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:46.472 [126/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:46.472 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:46.472 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:46.472 [129/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:46.472 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:46.472 [131/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:46.472 [132/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:46.472 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:46.472 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:46.472 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:46.472 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:46.472 [137/267] Linking static target lib/librte_rcu.a 00:05:46.472 [138/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.472 [139/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:46.472 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:46.472 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:46.472 [142/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:46.472 [143/267] Linking static target lib/librte_net.a 00:05:46.472 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:46.472 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:46.472 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:46.472 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:46.472 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:46.472 [149/267] Linking static target lib/librte_compressdev.a 00:05:46.472 [150/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:46.472 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 
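The "Generating lib/<name>.sym_chk" entries interleaved above are DPDK's per-library symbol checks, which validate each library's exported symbols against its version map. A rough manual spot-check of what one of the freshly linked static archives exports can be done with nm; the archive path below is illustrative and depends on the meson build directory:

    # List globally defined symbols of one linked archive (path is illustrative).
    nm -g --defined-only dpdk/build-tmp/lib/librte_timer.a | awk 'NF == 3 {print $3}' | sort -u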
00:05:46.472 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:46.472 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:46.472 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:46.472 [155/267] Linking static target lib/librte_dmadev.a 00:05:46.472 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:46.472 [157/267] Linking target lib/librte_log.so.24.1 00:05:46.472 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:46.472 [159/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:46.472 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:46.472 [161/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:46.472 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:46.472 [163/267] Linking static target lib/librte_reorder.a 00:05:46.472 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:46.472 [165/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:46.473 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:46.473 [167/267] Linking static target lib/librte_eal.a 00:05:46.473 [168/267] Linking static target lib/librte_mempool.a 00:05:46.473 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:46.473 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:46.473 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:46.473 [172/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:46.473 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:46.473 [174/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:46.473 [175/267] Linking static target lib/librte_power.a 00:05:46.473 [176/267] Linking static target lib/librte_security.a 00:05:46.473 [177/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:46.473 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:46.734 [179/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:46.734 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:46.734 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.734 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:46.734 [183/267] Linking static target drivers/librte_bus_vdev.a 00:05:46.734 [184/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:46.734 [185/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:46.734 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:46.734 [187/267] Linking target lib/librte_kvargs.so.24.1 00:05:46.734 [188/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:46.734 [189/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.734 [190/267] Linking static target lib/librte_hash.a 00:05:46.734 [191/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:46.734 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:46.734 [193/267] Linking static target lib/librte_mbuf.a 00:05:46.734 [194/267] Compiling C 
object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:46.734 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:46.734 [196/267] Linking static target drivers/librte_bus_pci.a 00:05:46.734 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:46.734 [198/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.734 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:46.734 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:46.734 [201/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:46.734 [202/267] Linking static target lib/librte_cryptodev.a 00:05:46.734 [203/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.734 [204/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:46.734 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.734 [206/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.995 [207/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.995 [208/267] Linking static target drivers/librte_mempool_ring.a 00:05:46.995 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:46.995 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.995 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.995 [212/267] Linking target lib/librte_telemetry.so.24.1 00:05:46.995 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.256 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:47.256 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.256 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.256 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.256 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:47.256 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:47.256 [220/267] Linking static target lib/librte_ethdev.a 00:05:47.517 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.517 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.517 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.778 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.778 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.778 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.721 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:48.721 [228/267] Linking static target lib/librte_vhost.a 00:05:48.982 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:05:50.896 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.483 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.053 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.053 [233/267] Linking target lib/librte_eal.so.24.1 00:05:58.053 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:58.314 [235/267] Linking target lib/librte_ring.so.24.1 00:05:58.314 [236/267] Linking target lib/librte_timer.so.24.1 00:05:58.314 [237/267] Linking target lib/librte_meter.so.24.1 00:05:58.314 [238/267] Linking target lib/librte_pci.so.24.1 00:05:58.314 [239/267] Linking target lib/librte_dmadev.so.24.1 00:05:58.314 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:05:58.314 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:58.314 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:58.314 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:58.314 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:58.314 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:58.314 [246/267] Linking target lib/librte_rcu.so.24.1 00:05:58.314 [247/267] Linking target lib/librte_mempool.so.24.1 00:05:58.314 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:05:58.574 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:58.574 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:58.574 [251/267] Linking target lib/librte_mbuf.so.24.1 00:05:58.574 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:05:58.574 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:58.834 [254/267] Linking target lib/librte_net.so.24.1 00:05:58.834 [255/267] Linking target lib/librte_compressdev.so.24.1 00:05:58.834 [256/267] Linking target lib/librte_reorder.so.24.1 00:05:58.834 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:05:58.834 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:58.834 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:58.834 [260/267] Linking target lib/librte_cmdline.so.24.1 00:05:58.834 [261/267] Linking target lib/librte_hash.so.24.1 00:05:58.834 [262/267] Linking target lib/librte_ethdev.so.24.1 00:05:58.834 [263/267] Linking target lib/librte_security.so.24.1 00:05:59.094 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:59.094 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:59.094 [266/267] Linking target lib/librte_power.so.24.1 00:05:59.094 [267/267] Linking target lib/librte_vhost.so.24.1 00:05:59.094 INFO: autodetecting backend as ninja 00:05:59.094 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:06:02.392 CC lib/log/log.o 00:06:02.392 CC lib/log/log_flags.o 00:06:02.392 CC lib/log/log_deprecated.o 00:06:02.392 CC lib/ut/ut.o 00:06:02.392 CC lib/ut_mock/mock.o 00:06:02.392 LIB libspdk_ut_mock.a 00:06:02.392 LIB libspdk_ut.a 00:06:02.392 LIB libspdk_log.a 
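The two INFO lines above mark meson autodetecting ninja and handing the DPDK build tree off to it, after which the log switches to compiling SPDK's own objects (the CC/LIB lines from lib/log onward). Re-running just the DPDK phase by hand amounts to invoking ninja against the same build directory reported in the log; the parallelism flag is whatever suits the builder:

    # Re-run only the DPDK compile/link phase; parallelism picked portably via nproc.
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j "$(nproc)"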
00:06:02.392 SO libspdk_ut_mock.so.6.0 00:06:02.392 SO libspdk_ut.so.2.0 00:06:02.392 SO libspdk_log.so.7.1 00:06:02.392 SYMLINK libspdk_ut_mock.so 00:06:02.392 SYMLINK libspdk_ut.so 00:06:02.392 SYMLINK libspdk_log.so 00:06:02.964 CC lib/util/base64.o 00:06:02.964 CC lib/util/bit_array.o 00:06:02.964 CC lib/util/cpuset.o 00:06:02.964 CC lib/util/crc16.o 00:06:02.964 CC lib/dma/dma.o 00:06:02.964 CC lib/ioat/ioat.o 00:06:02.964 CC lib/util/crc32.o 00:06:02.964 CC lib/util/crc32c.o 00:06:02.964 CC lib/util/crc32_ieee.o 00:06:02.964 CC lib/util/crc64.o 00:06:02.964 CXX lib/trace_parser/trace.o 00:06:02.964 CC lib/util/dif.o 00:06:02.964 CC lib/util/fd.o 00:06:02.964 CC lib/util/fd_group.o 00:06:02.964 CC lib/util/file.o 00:06:02.964 CC lib/util/hexlify.o 00:06:02.964 CC lib/util/iov.o 00:06:02.964 CC lib/util/math.o 00:06:02.964 CC lib/util/net.o 00:06:02.964 CC lib/util/pipe.o 00:06:02.964 CC lib/util/strerror_tls.o 00:06:02.964 CC lib/util/string.o 00:06:02.964 CC lib/util/uuid.o 00:06:02.964 CC lib/util/xor.o 00:06:02.964 CC lib/util/zipf.o 00:06:02.964 CC lib/util/md5.o 00:06:02.964 CC lib/vfio_user/host/vfio_user_pci.o 00:06:02.964 CC lib/vfio_user/host/vfio_user.o 00:06:02.964 LIB libspdk_dma.a 00:06:03.225 SO libspdk_dma.so.5.0 00:06:03.225 LIB libspdk_ioat.a 00:06:03.225 SYMLINK libspdk_dma.so 00:06:03.225 SO libspdk_ioat.so.7.0 00:06:03.225 SYMLINK libspdk_ioat.so 00:06:03.225 LIB libspdk_vfio_user.a 00:06:03.225 SO libspdk_vfio_user.so.5.0 00:06:03.485 LIB libspdk_util.a 00:06:03.485 SYMLINK libspdk_vfio_user.so 00:06:03.485 SO libspdk_util.so.10.1 00:06:03.485 SYMLINK libspdk_util.so 00:06:03.787 LIB libspdk_trace_parser.a 00:06:03.787 SO libspdk_trace_parser.so.6.0 00:06:03.787 SYMLINK libspdk_trace_parser.so 00:06:04.069 CC lib/conf/conf.o 00:06:04.069 CC lib/rdma_utils/rdma_utils.o 00:06:04.069 CC lib/vmd/vmd.o 00:06:04.069 CC lib/json/json_parse.o 00:06:04.069 CC lib/vmd/led.o 00:06:04.069 CC lib/json/json_util.o 00:06:04.069 CC lib/json/json_write.o 00:06:04.069 CC lib/idxd/idxd.o 00:06:04.069 CC lib/env_dpdk/env.o 00:06:04.069 CC lib/idxd/idxd_user.o 00:06:04.069 CC lib/env_dpdk/memory.o 00:06:04.069 CC lib/idxd/idxd_kernel.o 00:06:04.069 CC lib/env_dpdk/pci.o 00:06:04.069 CC lib/env_dpdk/init.o 00:06:04.069 CC lib/env_dpdk/threads.o 00:06:04.069 CC lib/env_dpdk/pci_ioat.o 00:06:04.069 CC lib/env_dpdk/pci_virtio.o 00:06:04.069 CC lib/env_dpdk/pci_vmd.o 00:06:04.069 CC lib/env_dpdk/pci_idxd.o 00:06:04.069 CC lib/env_dpdk/pci_event.o 00:06:04.069 CC lib/env_dpdk/sigbus_handler.o 00:06:04.069 CC lib/env_dpdk/pci_dpdk.o 00:06:04.069 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:04.069 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:04.329 LIB libspdk_conf.a 00:06:04.329 LIB libspdk_rdma_utils.a 00:06:04.329 SO libspdk_conf.so.6.0 00:06:04.329 SO libspdk_rdma_utils.so.1.0 00:06:04.329 LIB libspdk_json.a 00:06:04.329 SYMLINK libspdk_rdma_utils.so 00:06:04.329 SYMLINK libspdk_conf.so 00:06:04.329 SO libspdk_json.so.6.0 00:06:04.329 SYMLINK libspdk_json.so 00:06:04.590 LIB libspdk_idxd.a 00:06:04.590 SO libspdk_idxd.so.12.1 00:06:04.590 LIB libspdk_vmd.a 00:06:04.590 SO libspdk_vmd.so.6.0 00:06:04.590 SYMLINK libspdk_idxd.so 00:06:04.590 CC lib/rdma_provider/common.o 00:06:04.590 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:04.590 SYMLINK libspdk_vmd.so 00:06:04.851 CC lib/jsonrpc/jsonrpc_server.o 00:06:04.851 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:04.851 CC lib/jsonrpc/jsonrpc_client.o 00:06:04.851 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:04.851 LIB libspdk_rdma_provider.a 00:06:04.851 SO 
libspdk_rdma_provider.so.7.0 00:06:05.112 LIB libspdk_jsonrpc.a 00:06:05.112 SYMLINK libspdk_rdma_provider.so 00:06:05.112 SO libspdk_jsonrpc.so.6.0 00:06:05.112 SYMLINK libspdk_jsonrpc.so 00:06:05.112 LIB libspdk_env_dpdk.a 00:06:05.372 SO libspdk_env_dpdk.so.15.1 00:06:05.372 SYMLINK libspdk_env_dpdk.so 00:06:05.633 CC lib/rpc/rpc.o 00:06:05.633 LIB libspdk_rpc.a 00:06:05.894 SO libspdk_rpc.so.6.0 00:06:05.894 SYMLINK libspdk_rpc.so 00:06:06.154 CC lib/trace/trace.o 00:06:06.154 CC lib/trace/trace_flags.o 00:06:06.154 CC lib/trace/trace_rpc.o 00:06:06.154 CC lib/notify/notify.o 00:06:06.154 CC lib/notify/notify_rpc.o 00:06:06.154 CC lib/keyring/keyring.o 00:06:06.154 CC lib/keyring/keyring_rpc.o 00:06:06.414 LIB libspdk_notify.a 00:06:06.414 SO libspdk_notify.so.6.0 00:06:06.414 LIB libspdk_trace.a 00:06:06.414 LIB libspdk_keyring.a 00:06:06.414 SO libspdk_trace.so.11.0 00:06:06.414 SO libspdk_keyring.so.2.0 00:06:06.414 SYMLINK libspdk_notify.so 00:06:06.675 SYMLINK libspdk_keyring.so 00:06:06.675 SYMLINK libspdk_trace.so 00:06:06.935 CC lib/thread/thread.o 00:06:06.935 CC lib/sock/sock.o 00:06:06.935 CC lib/thread/iobuf.o 00:06:06.935 CC lib/sock/sock_rpc.o 00:06:07.507 LIB libspdk_sock.a 00:06:07.507 SO libspdk_sock.so.10.0 00:06:07.507 SYMLINK libspdk_sock.so 00:06:07.766 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:07.766 CC lib/nvme/nvme_ctrlr.o 00:06:07.766 CC lib/nvme/nvme_fabric.o 00:06:07.766 CC lib/nvme/nvme_ns_cmd.o 00:06:07.766 CC lib/nvme/nvme_ns.o 00:06:07.766 CC lib/nvme/nvme_pcie_common.o 00:06:07.766 CC lib/nvme/nvme_pcie.o 00:06:07.766 CC lib/nvme/nvme_qpair.o 00:06:07.766 CC lib/nvme/nvme.o 00:06:07.766 CC lib/nvme/nvme_quirks.o 00:06:07.766 CC lib/nvme/nvme_transport.o 00:06:07.766 CC lib/nvme/nvme_discovery.o 00:06:07.766 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:07.766 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:07.766 CC lib/nvme/nvme_tcp.o 00:06:07.766 CC lib/nvme/nvme_opal.o 00:06:07.766 CC lib/nvme/nvme_io_msg.o 00:06:07.766 CC lib/nvme/nvme_poll_group.o 00:06:07.766 CC lib/nvme/nvme_zns.o 00:06:07.766 CC lib/nvme/nvme_stubs.o 00:06:07.766 CC lib/nvme/nvme_auth.o 00:06:07.766 CC lib/nvme/nvme_cuse.o 00:06:07.766 CC lib/nvme/nvme_vfio_user.o 00:06:07.766 CC lib/nvme/nvme_rdma.o 00:06:08.334 LIB libspdk_thread.a 00:06:08.334 SO libspdk_thread.so.11.0 00:06:08.334 SYMLINK libspdk_thread.so 00:06:08.593 CC lib/init/subsystem.o 00:06:08.593 CC lib/init/json_config.o 00:06:08.593 CC lib/init/subsystem_rpc.o 00:06:08.593 CC lib/init/rpc.o 00:06:08.593 CC lib/vfu_tgt/tgt_endpoint.o 00:06:08.593 CC lib/vfu_tgt/tgt_rpc.o 00:06:08.593 CC lib/virtio/virtio.o 00:06:08.593 CC lib/blob/blobstore.o 00:06:08.593 CC lib/blob/request.o 00:06:08.593 CC lib/virtio/virtio_vhost_user.o 00:06:08.593 CC lib/accel/accel.o 00:06:08.593 CC lib/virtio/virtio_vfio_user.o 00:06:08.593 CC lib/blob/zeroes.o 00:06:08.593 CC lib/accel/accel_rpc.o 00:06:08.593 CC lib/virtio/virtio_pci.o 00:06:08.593 CC lib/accel/accel_sw.o 00:06:08.593 CC lib/blob/blob_bs_dev.o 00:06:08.593 CC lib/fsdev/fsdev.o 00:06:08.593 CC lib/fsdev/fsdev_io.o 00:06:08.593 CC lib/fsdev/fsdev_rpc.o 00:06:08.853 LIB libspdk_init.a 00:06:09.113 SO libspdk_init.so.6.0 00:06:09.113 LIB libspdk_vfu_tgt.a 00:06:09.113 LIB libspdk_virtio.a 00:06:09.113 SYMLINK libspdk_init.so 00:06:09.113 SO libspdk_vfu_tgt.so.3.0 00:06:09.113 SO libspdk_virtio.so.7.0 00:06:09.113 SYMLINK libspdk_vfu_tgt.so 00:06:09.113 SYMLINK libspdk_virtio.so 00:06:09.375 LIB libspdk_fsdev.a 00:06:09.375 SO libspdk_fsdev.so.2.0 00:06:09.375 CC lib/event/app.o 00:06:09.375 CC 
lib/event/reactor.o 00:06:09.375 CC lib/event/log_rpc.o 00:06:09.375 CC lib/event/app_rpc.o 00:06:09.375 CC lib/event/scheduler_static.o 00:06:09.375 SYMLINK libspdk_fsdev.so 00:06:09.635 LIB libspdk_accel.a 00:06:09.635 LIB libspdk_nvme.a 00:06:09.635 SO libspdk_accel.so.16.0 00:06:09.896 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:09.896 SYMLINK libspdk_accel.so 00:06:09.896 SO libspdk_nvme.so.15.0 00:06:09.896 LIB libspdk_event.a 00:06:09.896 SO libspdk_event.so.14.0 00:06:09.896 SYMLINK libspdk_event.so 00:06:10.157 SYMLINK libspdk_nvme.so 00:06:10.157 CC lib/bdev/bdev.o 00:06:10.157 CC lib/bdev/bdev_rpc.o 00:06:10.157 CC lib/bdev/bdev_zone.o 00:06:10.157 CC lib/bdev/part.o 00:06:10.157 CC lib/bdev/scsi_nvme.o 00:06:10.418 LIB libspdk_fuse_dispatcher.a 00:06:10.418 SO libspdk_fuse_dispatcher.so.1.0 00:06:10.418 SYMLINK libspdk_fuse_dispatcher.so 00:06:11.362 LIB libspdk_blob.a 00:06:11.362 SO libspdk_blob.so.11.0 00:06:11.362 SYMLINK libspdk_blob.so 00:06:11.933 CC lib/blobfs/blobfs.o 00:06:11.933 CC lib/blobfs/tree.o 00:06:11.933 CC lib/lvol/lvol.o 00:06:12.505 LIB libspdk_bdev.a 00:06:12.505 LIB libspdk_blobfs.a 00:06:12.505 SO libspdk_blobfs.so.10.0 00:06:12.505 SO libspdk_bdev.so.17.0 00:06:12.505 LIB libspdk_lvol.a 00:06:12.505 SO libspdk_lvol.so.10.0 00:06:12.505 SYMLINK libspdk_blobfs.so 00:06:12.766 SYMLINK libspdk_bdev.so 00:06:12.766 SYMLINK libspdk_lvol.so 00:06:13.026 CC lib/nbd/nbd.o 00:06:13.026 CC lib/nbd/nbd_rpc.o 00:06:13.026 CC lib/ftl/ftl_core.o 00:06:13.026 CC lib/ftl/ftl_init.o 00:06:13.026 CC lib/ublk/ublk.o 00:06:13.026 CC lib/nvmf/ctrlr.o 00:06:13.026 CC lib/scsi/dev.o 00:06:13.026 CC lib/ftl/ftl_layout.o 00:06:13.026 CC lib/ublk/ublk_rpc.o 00:06:13.026 CC lib/nvmf/ctrlr_discovery.o 00:06:13.026 CC lib/scsi/lun.o 00:06:13.026 CC lib/ftl/ftl_debug.o 00:06:13.026 CC lib/scsi/port.o 00:06:13.026 CC lib/nvmf/ctrlr_bdev.o 00:06:13.026 CC lib/ftl/ftl_io.o 00:06:13.026 CC lib/scsi/scsi.o 00:06:13.026 CC lib/nvmf/subsystem.o 00:06:13.026 CC lib/ftl/ftl_sb.o 00:06:13.026 CC lib/scsi/scsi_bdev.o 00:06:13.026 CC lib/nvmf/nvmf.o 00:06:13.026 CC lib/ftl/ftl_l2p.o 00:06:13.026 CC lib/nvmf/nvmf_rpc.o 00:06:13.026 CC lib/ftl/ftl_l2p_flat.o 00:06:13.026 CC lib/scsi/scsi_pr.o 00:06:13.026 CC lib/scsi/scsi_rpc.o 00:06:13.026 CC lib/nvmf/transport.o 00:06:13.026 CC lib/ftl/ftl_nv_cache.o 00:06:13.026 CC lib/scsi/task.o 00:06:13.026 CC lib/nvmf/tcp.o 00:06:13.026 CC lib/ftl/ftl_band.o 00:06:13.026 CC lib/ftl/ftl_band_ops.o 00:06:13.026 CC lib/nvmf/stubs.o 00:06:13.026 CC lib/ftl/ftl_writer.o 00:06:13.026 CC lib/nvmf/mdns_server.o 00:06:13.026 CC lib/ftl/ftl_rq.o 00:06:13.026 CC lib/nvmf/vfio_user.o 00:06:13.026 CC lib/ftl/ftl_reloc.o 00:06:13.027 CC lib/nvmf/rdma.o 00:06:13.027 CC lib/ftl/ftl_l2p_cache.o 00:06:13.027 CC lib/nvmf/auth.o 00:06:13.027 CC lib/ftl/ftl_p2l.o 00:06:13.027 CC lib/ftl/ftl_p2l_log.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:13.027 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:13.027 CC lib/ftl/utils/ftl_conf.o 00:06:13.027 CC lib/ftl/utils/ftl_md.o 
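The long CC run above, which continues below through the rest of lib/ftl, enumerates SPDK's per-library compile units. To see how those units distribute across libraries in a saved capture of this output, a quick tally works; the capture filename is again hypothetical:

    # Tally "CC lib/<name>/..." compile units per library (log filename assumed).
    grep -oE 'CC lib/[a-z0-9_]+/' build.log | sort | uniq -c | sort -rn | head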
00:06:13.027 CC lib/ftl/utils/ftl_mempool.o 00:06:13.027 CC lib/ftl/utils/ftl_bitmap.o 00:06:13.027 CC lib/ftl/utils/ftl_property.o 00:06:13.027 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:13.027 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:13.027 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:13.027 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:13.027 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:13.027 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:13.027 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:13.027 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:13.027 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:13.027 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:13.027 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:13.027 CC lib/ftl/base/ftl_base_dev.o 00:06:13.027 CC lib/ftl/base/ftl_base_bdev.o 00:06:13.027 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:13.027 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:13.027 CC lib/ftl/ftl_trace.o 00:06:13.597 LIB libspdk_nbd.a 00:06:13.597 SO libspdk_nbd.so.7.0 00:06:13.597 SYMLINK libspdk_nbd.so 00:06:13.597 LIB libspdk_scsi.a 00:06:13.597 SO libspdk_scsi.so.9.0 00:06:13.858 LIB libspdk_ublk.a 00:06:13.858 SYMLINK libspdk_scsi.so 00:06:13.858 SO libspdk_ublk.so.3.0 00:06:13.858 SYMLINK libspdk_ublk.so 00:06:14.120 CC lib/iscsi/conn.o 00:06:14.120 CC lib/iscsi/init_grp.o 00:06:14.120 LIB libspdk_ftl.a 00:06:14.120 CC lib/iscsi/iscsi.o 00:06:14.120 CC lib/iscsi/param.o 00:06:14.120 CC lib/iscsi/portal_grp.o 00:06:14.120 CC lib/iscsi/iscsi_rpc.o 00:06:14.120 CC lib/iscsi/tgt_node.o 00:06:14.120 CC lib/iscsi/iscsi_subsystem.o 00:06:14.120 CC lib/iscsi/task.o 00:06:14.120 CC lib/vhost/vhost.o 00:06:14.120 CC lib/vhost/vhost_rpc.o 00:06:14.120 CC lib/vhost/vhost_blk.o 00:06:14.120 CC lib/vhost/vhost_scsi.o 00:06:14.120 CC lib/vhost/rte_vhost_user.o 00:06:14.381 SO libspdk_ftl.so.9.0 00:06:14.642 SYMLINK libspdk_ftl.so 00:06:14.905 LIB libspdk_nvmf.a 00:06:14.905 SO libspdk_nvmf.so.20.0 00:06:15.166 LIB libspdk_vhost.a 00:06:15.166 SO libspdk_vhost.so.8.0 00:06:15.166 SYMLINK libspdk_nvmf.so 00:06:15.166 SYMLINK libspdk_vhost.so 00:06:15.428 LIB libspdk_iscsi.a 00:06:15.428 SO libspdk_iscsi.so.8.0 00:06:15.690 SYMLINK libspdk_iscsi.so 00:06:16.262 CC module/env_dpdk/env_dpdk_rpc.o 00:06:16.262 CC module/vfu_device/vfu_virtio.o 00:06:16.262 CC module/vfu_device/vfu_virtio_blk.o 00:06:16.262 CC module/vfu_device/vfu_virtio_scsi.o 00:06:16.262 CC module/vfu_device/vfu_virtio_rpc.o 00:06:16.262 CC module/vfu_device/vfu_virtio_fs.o 00:06:16.262 CC module/accel/iaa/accel_iaa.o 00:06:16.262 LIB libspdk_env_dpdk_rpc.a 00:06:16.262 CC module/accel/iaa/accel_iaa_rpc.o 00:06:16.262 CC module/sock/posix/posix.o 00:06:16.262 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:16.262 CC module/blob/bdev/blob_bdev.o 00:06:16.262 CC module/scheduler/gscheduler/gscheduler.o 00:06:16.262 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:16.262 CC module/accel/ioat/accel_ioat.o 00:06:16.262 CC module/keyring/linux/keyring.o 00:06:16.262 CC module/accel/ioat/accel_ioat_rpc.o 00:06:16.262 CC module/keyring/linux/keyring_rpc.o 00:06:16.262 CC module/accel/error/accel_error.o 00:06:16.262 CC module/accel/dsa/accel_dsa.o 00:06:16.262 CC module/accel/error/accel_error_rpc.o 00:06:16.262 CC module/fsdev/aio/fsdev_aio.o 00:06:16.262 CC module/accel/dsa/accel_dsa_rpc.o 00:06:16.262 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:16.262 CC module/fsdev/aio/linux_aio_mgr.o 00:06:16.262 CC module/keyring/file/keyring.o 00:06:16.262 CC module/keyring/file/keyring_rpc.o 00:06:16.262 SO libspdk_env_dpdk_rpc.so.6.0 00:06:16.524 SYMLINK 
libspdk_env_dpdk_rpc.so 00:06:16.524 LIB libspdk_keyring_linux.a 00:06:16.524 LIB libspdk_scheduler_dpdk_governor.a 00:06:16.524 LIB libspdk_scheduler_gscheduler.a 00:06:16.524 LIB libspdk_keyring_file.a 00:06:16.524 LIB libspdk_accel_ioat.a 00:06:16.524 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:16.524 SO libspdk_keyring_linux.so.1.0 00:06:16.524 SO libspdk_scheduler_gscheduler.so.4.0 00:06:16.524 LIB libspdk_scheduler_dynamic.a 00:06:16.524 SO libspdk_accel_ioat.so.6.0 00:06:16.524 LIB libspdk_accel_error.a 00:06:16.524 LIB libspdk_accel_iaa.a 00:06:16.524 SO libspdk_keyring_file.so.2.0 00:06:16.524 SO libspdk_scheduler_dynamic.so.4.0 00:06:16.524 SO libspdk_accel_error.so.2.0 00:06:16.524 SYMLINK libspdk_scheduler_gscheduler.so 00:06:16.524 SO libspdk_accel_iaa.so.3.0 00:06:16.786 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:16.786 SYMLINK libspdk_keyring_linux.so 00:06:16.786 LIB libspdk_blob_bdev.a 00:06:16.786 SYMLINK libspdk_keyring_file.so 00:06:16.786 SYMLINK libspdk_accel_ioat.so 00:06:16.786 LIB libspdk_accel_dsa.a 00:06:16.786 SYMLINK libspdk_scheduler_dynamic.so 00:06:16.786 SYMLINK libspdk_accel_error.so 00:06:16.786 SO libspdk_blob_bdev.so.11.0 00:06:16.786 SO libspdk_accel_dsa.so.5.0 00:06:16.786 SYMLINK libspdk_accel_iaa.so 00:06:16.786 SYMLINK libspdk_blob_bdev.so 00:06:16.786 LIB libspdk_vfu_device.a 00:06:16.786 SYMLINK libspdk_accel_dsa.so 00:06:16.786 SO libspdk_vfu_device.so.3.0 00:06:17.047 SYMLINK libspdk_vfu_device.so 00:06:17.047 LIB libspdk_fsdev_aio.a 00:06:17.048 SO libspdk_fsdev_aio.so.1.0 00:06:17.048 LIB libspdk_sock_posix.a 00:06:17.048 SO libspdk_sock_posix.so.6.0 00:06:17.048 SYMLINK libspdk_fsdev_aio.so 00:06:17.309 SYMLINK libspdk_sock_posix.so 00:06:17.309 CC module/bdev/gpt/gpt.o 00:06:17.309 CC module/bdev/gpt/vbdev_gpt.o 00:06:17.309 CC module/blobfs/bdev/blobfs_bdev.o 00:06:17.309 CC module/bdev/delay/vbdev_delay.o 00:06:17.309 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:17.309 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:17.309 CC module/bdev/split/vbdev_split.o 00:06:17.310 CC module/bdev/error/vbdev_error.o 00:06:17.310 CC module/bdev/null/bdev_null.o 00:06:17.310 CC module/bdev/null/bdev_null_rpc.o 00:06:17.310 CC module/bdev/split/vbdev_split_rpc.o 00:06:17.310 CC module/bdev/error/vbdev_error_rpc.o 00:06:17.310 CC module/bdev/aio/bdev_aio.o 00:06:17.310 CC module/bdev/raid/bdev_raid.o 00:06:17.310 CC module/bdev/malloc/bdev_malloc.o 00:06:17.310 CC module/bdev/lvol/vbdev_lvol.o 00:06:17.310 CC module/bdev/aio/bdev_aio_rpc.o 00:06:17.310 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:17.310 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:17.310 CC module/bdev/raid/bdev_raid_rpc.o 00:06:17.310 CC module/bdev/passthru/vbdev_passthru.o 00:06:17.310 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:17.310 CC module/bdev/raid/bdev_raid_sb.o 00:06:17.310 CC module/bdev/raid/raid0.o 00:06:17.310 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:17.310 CC module/bdev/raid/raid1.o 00:06:17.310 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:17.310 CC module/bdev/iscsi/bdev_iscsi.o 00:06:17.310 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:17.310 CC module/bdev/raid/concat.o 00:06:17.310 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:17.310 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:17.310 CC module/bdev/nvme/bdev_nvme.o 00:06:17.310 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:17.310 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:17.310 CC module/bdev/nvme/nvme_rpc.o 00:06:17.310 CC module/bdev/nvme/bdev_mdns_client.o 00:06:17.310 CC 
module/bdev/ftl/bdev_ftl.o 00:06:17.310 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:17.310 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:17.310 CC module/bdev/nvme/vbdev_opal.o 00:06:17.310 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:17.570 LIB libspdk_blobfs_bdev.a 00:06:17.570 SO libspdk_blobfs_bdev.so.6.0 00:06:17.570 LIB libspdk_bdev_null.a 00:06:17.570 LIB libspdk_bdev_split.a 00:06:17.831 LIB libspdk_bdev_gpt.a 00:06:17.831 SO libspdk_bdev_split.so.6.0 00:06:17.831 SO libspdk_bdev_null.so.6.0 00:06:17.831 SYMLINK libspdk_blobfs_bdev.so 00:06:17.831 LIB libspdk_bdev_error.a 00:06:17.831 SO libspdk_bdev_gpt.so.6.0 00:06:17.831 LIB libspdk_bdev_ftl.a 00:06:17.831 LIB libspdk_bdev_passthru.a 00:06:17.831 SYMLINK libspdk_bdev_split.so 00:06:17.831 SO libspdk_bdev_error.so.6.0 00:06:17.831 SYMLINK libspdk_bdev_null.so 00:06:17.831 SO libspdk_bdev_ftl.so.6.0 00:06:17.831 SO libspdk_bdev_passthru.so.6.0 00:06:17.831 LIB libspdk_bdev_zone_block.a 00:06:17.831 LIB libspdk_bdev_malloc.a 00:06:17.831 LIB libspdk_bdev_aio.a 00:06:17.831 LIB libspdk_bdev_delay.a 00:06:17.831 SYMLINK libspdk_bdev_gpt.so 00:06:17.831 LIB libspdk_bdev_iscsi.a 00:06:17.831 SO libspdk_bdev_zone_block.so.6.0 00:06:17.831 SYMLINK libspdk_bdev_error.so 00:06:17.831 SO libspdk_bdev_aio.so.6.0 00:06:17.831 SO libspdk_bdev_malloc.so.6.0 00:06:17.831 SO libspdk_bdev_delay.so.6.0 00:06:17.831 SO libspdk_bdev_iscsi.so.6.0 00:06:17.831 SYMLINK libspdk_bdev_passthru.so 00:06:17.831 SYMLINK libspdk_bdev_ftl.so 00:06:17.831 SYMLINK libspdk_bdev_zone_block.so 00:06:17.831 SYMLINK libspdk_bdev_aio.so 00:06:17.831 SYMLINK libspdk_bdev_malloc.so 00:06:17.831 LIB libspdk_bdev_lvol.a 00:06:17.831 SYMLINK libspdk_bdev_delay.so 00:06:17.831 SYMLINK libspdk_bdev_iscsi.so 00:06:18.092 LIB libspdk_bdev_virtio.a 00:06:18.092 SO libspdk_bdev_lvol.so.6.0 00:06:18.092 SO libspdk_bdev_virtio.so.6.0 00:06:18.092 SYMLINK libspdk_bdev_lvol.so 00:06:18.092 SYMLINK libspdk_bdev_virtio.so 00:06:18.352 LIB libspdk_bdev_raid.a 00:06:18.352 SO libspdk_bdev_raid.so.6.0 00:06:18.612 SYMLINK libspdk_bdev_raid.so 00:06:19.556 LIB libspdk_bdev_nvme.a 00:06:19.818 SO libspdk_bdev_nvme.so.7.1 00:06:19.818 SYMLINK libspdk_bdev_nvme.so 00:06:20.760 CC module/event/subsystems/vmd/vmd.o 00:06:20.760 CC module/event/subsystems/keyring/keyring.o 00:06:20.760 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:20.760 CC module/event/subsystems/iobuf/iobuf.o 00:06:20.760 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:20.760 CC module/event/subsystems/sock/sock.o 00:06:20.760 CC module/event/subsystems/scheduler/scheduler.o 00:06:20.760 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:20.760 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:20.760 CC module/event/subsystems/fsdev/fsdev.o 00:06:20.760 LIB libspdk_event_keyring.a 00:06:20.760 LIB libspdk_event_vfu_tgt.a 00:06:20.760 LIB libspdk_event_vmd.a 00:06:20.760 LIB libspdk_event_vhost_blk.a 00:06:20.760 LIB libspdk_event_scheduler.a 00:06:20.760 LIB libspdk_event_sock.a 00:06:20.760 LIB libspdk_event_fsdev.a 00:06:20.760 LIB libspdk_event_iobuf.a 00:06:20.760 SO libspdk_event_keyring.so.1.0 00:06:20.760 SO libspdk_event_vfu_tgt.so.3.0 00:06:20.760 SO libspdk_event_scheduler.so.4.0 00:06:20.760 SO libspdk_event_vhost_blk.so.3.0 00:06:20.760 SO libspdk_event_sock.so.5.0 00:06:20.760 SO libspdk_event_vmd.so.6.0 00:06:20.760 SO libspdk_event_fsdev.so.1.0 00:06:20.760 SO libspdk_event_iobuf.so.3.0 00:06:20.760 SYMLINK libspdk_event_keyring.so 00:06:20.760 SYMLINK libspdk_event_vfu_tgt.so 00:06:20.760 SYMLINK 
libspdk_event_scheduler.so 00:06:20.760 SYMLINK libspdk_event_sock.so 00:06:20.760 SYMLINK libspdk_event_vhost_blk.so 00:06:20.760 SYMLINK libspdk_event_fsdev.so 00:06:20.760 SYMLINK libspdk_event_vmd.so 00:06:20.760 SYMLINK libspdk_event_iobuf.so 00:06:21.333 CC module/event/subsystems/accel/accel.o 00:06:21.333 LIB libspdk_event_accel.a 00:06:21.333 SO libspdk_event_accel.so.6.0 00:06:21.594 SYMLINK libspdk_event_accel.so 00:06:21.855 CC module/event/subsystems/bdev/bdev.o 00:06:22.114 LIB libspdk_event_bdev.a 00:06:22.114 SO libspdk_event_bdev.so.6.0 00:06:22.114 SYMLINK libspdk_event_bdev.so 00:06:22.686 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:22.686 CC module/event/subsystems/scsi/scsi.o 00:06:22.686 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:22.686 CC module/event/subsystems/nbd/nbd.o 00:06:22.686 CC module/event/subsystems/ublk/ublk.o 00:06:22.686 LIB libspdk_event_nbd.a 00:06:22.686 LIB libspdk_event_ublk.a 00:06:22.686 LIB libspdk_event_scsi.a 00:06:22.686 SO libspdk_event_nbd.so.6.0 00:06:22.686 SO libspdk_event_ublk.so.3.0 00:06:22.686 SO libspdk_event_scsi.so.6.0 00:06:22.686 LIB libspdk_event_nvmf.a 00:06:22.686 SYMLINK libspdk_event_nbd.so 00:06:22.686 SYMLINK libspdk_event_ublk.so 00:06:22.686 SO libspdk_event_nvmf.so.6.0 00:06:22.686 SYMLINK libspdk_event_scsi.so 00:06:22.948 SYMLINK libspdk_event_nvmf.so 00:06:23.208 CC module/event/subsystems/iscsi/iscsi.o 00:06:23.208 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:23.208 LIB libspdk_event_vhost_scsi.a 00:06:23.469 LIB libspdk_event_iscsi.a 00:06:23.469 SO libspdk_event_vhost_scsi.so.3.0 00:06:23.469 SO libspdk_event_iscsi.so.6.0 00:06:23.469 SYMLINK libspdk_event_vhost_scsi.so 00:06:23.469 SYMLINK libspdk_event_iscsi.so 00:06:23.730 SO libspdk.so.6.0 00:06:23.730 SYMLINK libspdk.so 00:06:23.992 CC app/trace_record/trace_record.o 00:06:23.992 CXX app/trace/trace.o 00:06:23.992 CC app/spdk_top/spdk_top.o 00:06:23.992 CC app/spdk_lspci/spdk_lspci.o 00:06:23.992 TEST_HEADER include/spdk/accel.h 00:06:23.992 CC app/spdk_nvme_perf/perf.o 00:06:23.992 CC app/spdk_nvme_identify/identify.o 00:06:23.992 TEST_HEADER include/spdk/accel_module.h 00:06:23.992 CC test/rpc_client/rpc_client_test.o 00:06:23.992 TEST_HEADER include/spdk/assert.h 00:06:23.992 CC app/spdk_nvme_discover/discovery_aer.o 00:06:23.992 TEST_HEADER include/spdk/barrier.h 00:06:23.992 TEST_HEADER include/spdk/base64.h 00:06:23.992 TEST_HEADER include/spdk/bdev.h 00:06:23.992 TEST_HEADER include/spdk/bdev_module.h 00:06:23.992 TEST_HEADER include/spdk/bdev_zone.h 00:06:23.992 TEST_HEADER include/spdk/bit_array.h 00:06:23.992 TEST_HEADER include/spdk/bit_pool.h 00:06:23.992 TEST_HEADER include/spdk/blob_bdev.h 00:06:23.992 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:23.992 TEST_HEADER include/spdk/blobfs.h 00:06:23.992 TEST_HEADER include/spdk/conf.h 00:06:23.992 TEST_HEADER include/spdk/blob.h 00:06:23.992 TEST_HEADER include/spdk/config.h 00:06:23.992 TEST_HEADER include/spdk/cpuset.h 00:06:23.992 TEST_HEADER include/spdk/crc16.h 00:06:23.992 TEST_HEADER include/spdk/crc64.h 00:06:23.992 TEST_HEADER include/spdk/crc32.h 00:06:23.992 TEST_HEADER include/spdk/dif.h 00:06:23.992 TEST_HEADER include/spdk/dma.h 00:06:23.992 TEST_HEADER include/spdk/endian.h 00:06:23.992 TEST_HEADER include/spdk/env.h 00:06:23.992 TEST_HEADER include/spdk/env_dpdk.h 00:06:23.992 TEST_HEADER include/spdk/fd_group.h 00:06:23.992 TEST_HEADER include/spdk/event.h 00:06:24.259 TEST_HEADER include/spdk/fd.h 00:06:24.259 CC examples/interrupt_tgt/interrupt_tgt.o 
00:06:24.259 TEST_HEADER include/spdk/fsdev.h 00:06:24.259 TEST_HEADER include/spdk/file.h 00:06:24.259 TEST_HEADER include/spdk/fsdev_module.h 00:06:24.259 TEST_HEADER include/spdk/ftl.h 00:06:24.259 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:24.259 TEST_HEADER include/spdk/gpt_spec.h 00:06:24.259 CC app/nvmf_tgt/nvmf_main.o 00:06:24.259 CC app/spdk_dd/spdk_dd.o 00:06:24.259 TEST_HEADER include/spdk/hexlify.h 00:06:24.259 CC app/iscsi_tgt/iscsi_tgt.o 00:06:24.259 TEST_HEADER include/spdk/histogram_data.h 00:06:24.259 TEST_HEADER include/spdk/idxd.h 00:06:24.259 TEST_HEADER include/spdk/init.h 00:06:24.259 TEST_HEADER include/spdk/idxd_spec.h 00:06:24.259 TEST_HEADER include/spdk/ioat.h 00:06:24.259 TEST_HEADER include/spdk/ioat_spec.h 00:06:24.259 TEST_HEADER include/spdk/json.h 00:06:24.259 TEST_HEADER include/spdk/iscsi_spec.h 00:06:24.259 TEST_HEADER include/spdk/keyring.h 00:06:24.259 TEST_HEADER include/spdk/jsonrpc.h 00:06:24.259 TEST_HEADER include/spdk/keyring_module.h 00:06:24.259 TEST_HEADER include/spdk/likely.h 00:06:24.259 TEST_HEADER include/spdk/lvol.h 00:06:24.259 TEST_HEADER include/spdk/log.h 00:06:24.259 TEST_HEADER include/spdk/memory.h 00:06:24.259 TEST_HEADER include/spdk/md5.h 00:06:24.259 TEST_HEADER include/spdk/mmio.h 00:06:24.259 TEST_HEADER include/spdk/nbd.h 00:06:24.259 CC app/spdk_tgt/spdk_tgt.o 00:06:24.259 TEST_HEADER include/spdk/net.h 00:06:24.259 TEST_HEADER include/spdk/notify.h 00:06:24.259 TEST_HEADER include/spdk/nvme.h 00:06:24.259 TEST_HEADER include/spdk/nvme_intel.h 00:06:24.259 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:24.259 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:24.259 TEST_HEADER include/spdk/nvme_spec.h 00:06:24.259 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:24.259 TEST_HEADER include/spdk/nvme_zns.h 00:06:24.259 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:24.259 TEST_HEADER include/spdk/nvmf.h 00:06:24.259 TEST_HEADER include/spdk/nvmf_transport.h 00:06:24.259 TEST_HEADER include/spdk/nvmf_spec.h 00:06:24.259 TEST_HEADER include/spdk/opal.h 00:06:24.259 TEST_HEADER include/spdk/opal_spec.h 00:06:24.259 TEST_HEADER include/spdk/pci_ids.h 00:06:24.259 TEST_HEADER include/spdk/pipe.h 00:06:24.259 TEST_HEADER include/spdk/queue.h 00:06:24.259 TEST_HEADER include/spdk/scheduler.h 00:06:24.259 TEST_HEADER include/spdk/reduce.h 00:06:24.259 TEST_HEADER include/spdk/rpc.h 00:06:24.259 TEST_HEADER include/spdk/scsi.h 00:06:24.259 TEST_HEADER include/spdk/scsi_spec.h 00:06:24.259 TEST_HEADER include/spdk/sock.h 00:06:24.259 TEST_HEADER include/spdk/stdinc.h 00:06:24.259 TEST_HEADER include/spdk/string.h 00:06:24.259 TEST_HEADER include/spdk/thread.h 00:06:24.259 TEST_HEADER include/spdk/trace.h 00:06:24.259 TEST_HEADER include/spdk/trace_parser.h 00:06:24.259 TEST_HEADER include/spdk/tree.h 00:06:24.259 TEST_HEADER include/spdk/ublk.h 00:06:24.259 TEST_HEADER include/spdk/util.h 00:06:24.259 TEST_HEADER include/spdk/uuid.h 00:06:24.259 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:24.259 TEST_HEADER include/spdk/version.h 00:06:24.259 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:24.259 TEST_HEADER include/spdk/vhost.h 00:06:24.259 TEST_HEADER include/spdk/xor.h 00:06:24.259 TEST_HEADER include/spdk/vmd.h 00:06:24.259 TEST_HEADER include/spdk/zipf.h 00:06:24.259 CXX test/cpp_headers/accel.o 00:06:24.259 CXX test/cpp_headers/accel_module.o 00:06:24.259 CXX test/cpp_headers/assert.o 00:06:24.259 CXX test/cpp_headers/barrier.o 00:06:24.259 CXX test/cpp_headers/base64.o 00:06:24.259 CXX test/cpp_headers/bdev_module.o 00:06:24.259 
CXX test/cpp_headers/bdev.o 00:06:24.259 CXX test/cpp_headers/bdev_zone.o 00:06:24.259 CXX test/cpp_headers/bit_array.o 00:06:24.259 CXX test/cpp_headers/bit_pool.o 00:06:24.259 CXX test/cpp_headers/blob_bdev.o 00:06:24.259 CXX test/cpp_headers/blobfs_bdev.o 00:06:24.259 CXX test/cpp_headers/blob.o 00:06:24.259 CXX test/cpp_headers/blobfs.o 00:06:24.259 CXX test/cpp_headers/conf.o 00:06:24.259 CXX test/cpp_headers/config.o 00:06:24.259 CXX test/cpp_headers/cpuset.o 00:06:24.259 CXX test/cpp_headers/crc16.o 00:06:24.259 CXX test/cpp_headers/crc32.o 00:06:24.259 CXX test/cpp_headers/crc64.o 00:06:24.259 CXX test/cpp_headers/dif.o 00:06:24.259 CXX test/cpp_headers/dma.o 00:06:24.259 CXX test/cpp_headers/endian.o 00:06:24.259 CXX test/cpp_headers/env_dpdk.o 00:06:24.259 CXX test/cpp_headers/env.o 00:06:24.259 CXX test/cpp_headers/event.o 00:06:24.259 CXX test/cpp_headers/fd_group.o 00:06:24.259 CXX test/cpp_headers/fd.o 00:06:24.259 CXX test/cpp_headers/fsdev_module.o 00:06:24.259 CXX test/cpp_headers/file.o 00:06:24.259 CXX test/cpp_headers/ftl.o 00:06:24.259 CXX test/cpp_headers/fsdev.o 00:06:24.259 CXX test/cpp_headers/fuse_dispatcher.o 00:06:24.259 CXX test/cpp_headers/gpt_spec.o 00:06:24.259 CXX test/cpp_headers/histogram_data.o 00:06:24.259 CXX test/cpp_headers/idxd.o 00:06:24.259 CXX test/cpp_headers/hexlify.o 00:06:24.259 CXX test/cpp_headers/idxd_spec.o 00:06:24.259 CXX test/cpp_headers/ioat.o 00:06:24.259 CXX test/cpp_headers/init.o 00:06:24.259 CXX test/cpp_headers/ioat_spec.o 00:06:24.259 CXX test/cpp_headers/json.o 00:06:24.259 CXX test/cpp_headers/iscsi_spec.o 00:06:24.259 CXX test/cpp_headers/keyring.o 00:06:24.259 CXX test/cpp_headers/log.o 00:06:24.259 CXX test/cpp_headers/jsonrpc.o 00:06:24.259 CXX test/cpp_headers/md5.o 00:06:24.259 CXX test/cpp_headers/memory.o 00:06:24.259 CXX test/cpp_headers/likely.o 00:06:24.259 CXX test/cpp_headers/keyring_module.o 00:06:24.259 CXX test/cpp_headers/mmio.o 00:06:24.259 CXX test/cpp_headers/lvol.o 00:06:24.259 CXX test/cpp_headers/nbd.o 00:06:24.259 CXX test/cpp_headers/net.o 00:06:24.259 CXX test/cpp_headers/notify.o 00:06:24.259 CXX test/cpp_headers/nvme_intel.o 00:06:24.259 CXX test/cpp_headers/nvme_ocssd.o 00:06:24.259 CXX test/cpp_headers/nvme.o 00:06:24.259 CXX test/cpp_headers/nvme_spec.o 00:06:24.259 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:24.259 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:24.259 CC examples/util/zipf/zipf.o 00:06:24.259 CC examples/ioat/verify/verify.o 00:06:24.259 CXX test/cpp_headers/nvme_zns.o 00:06:24.259 CXX test/cpp_headers/nvmf_cmd.o 00:06:24.259 CXX test/cpp_headers/nvmf.o 00:06:24.259 CXX test/cpp_headers/nvmf_spec.o 00:06:24.259 CXX test/cpp_headers/opal.o 00:06:24.259 CXX test/cpp_headers/nvmf_transport.o 00:06:24.259 CC examples/ioat/perf/perf.o 00:06:24.259 CXX test/cpp_headers/pci_ids.o 00:06:24.259 CXX test/cpp_headers/opal_spec.o 00:06:24.259 CC test/thread/poller_perf/poller_perf.o 00:06:24.259 CXX test/cpp_headers/queue.o 00:06:24.259 CXX test/cpp_headers/pipe.o 00:06:24.259 LINK spdk_lspci 00:06:24.259 CC test/env/pci/pci_ut.o 00:06:24.259 CXX test/cpp_headers/reduce.o 00:06:24.259 CXX test/cpp_headers/scsi.o 00:06:24.259 CXX test/cpp_headers/rpc.o 00:06:24.259 CXX test/cpp_headers/scsi_spec.o 00:06:24.259 CXX test/cpp_headers/scheduler.o 00:06:24.259 CC test/app/stub/stub.o 00:06:24.259 CC test/app/jsoncat/jsoncat.o 00:06:24.259 CXX test/cpp_headers/sock.o 00:06:24.259 CXX test/cpp_headers/string.o 00:06:24.259 CXX test/cpp_headers/stdinc.o 00:06:24.259 CC test/env/memory/memory_ut.o 
00:06:24.259 CC test/app/histogram_perf/histogram_perf.o 00:06:24.260 CXX test/cpp_headers/thread.o 00:06:24.260 CXX test/cpp_headers/trace.o 00:06:24.260 CC test/env/vtophys/vtophys.o 00:06:24.260 CXX test/cpp_headers/trace_parser.o 00:06:24.528 CXX test/cpp_headers/tree.o 00:06:24.528 CXX test/cpp_headers/ublk.o 00:06:24.528 CXX test/cpp_headers/uuid.o 00:06:24.528 CXX test/cpp_headers/util.o 00:06:24.528 CXX test/cpp_headers/version.o 00:06:24.528 CXX test/cpp_headers/vhost.o 00:06:24.528 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:24.528 CXX test/cpp_headers/vfio_user_pci.o 00:06:24.528 CXX test/cpp_headers/vfio_user_spec.o 00:06:24.528 CXX test/cpp_headers/vmd.o 00:06:24.528 CXX test/cpp_headers/xor.o 00:06:24.528 CC app/fio/nvme/fio_plugin.o 00:06:24.528 CXX test/cpp_headers/zipf.o 00:06:24.528 CC app/fio/bdev/fio_plugin.o 00:06:24.528 CC test/app/bdev_svc/bdev_svc.o 00:06:24.528 CC test/dma/test_dma/test_dma.o 00:06:24.528 LINK spdk_nvme_discover 00:06:24.528 LINK rpc_client_test 00:06:24.528 LINK spdk_trace_record 00:06:24.800 LINK nvmf_tgt 00:06:24.800 LINK interrupt_tgt 00:06:24.800 LINK iscsi_tgt 00:06:25.062 LINK spdk_tgt 00:06:25.062 CC test/env/mem_callbacks/mem_callbacks.o 00:06:25.062 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:25.062 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:25.062 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:25.062 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:25.321 LINK spdk_trace 00:06:25.321 LINK zipf 00:06:25.321 LINK spdk_dd 00:06:25.321 LINK histogram_perf 00:06:25.321 LINK vtophys 00:06:25.321 LINK ioat_perf 00:06:25.321 LINK jsoncat 00:06:25.321 LINK poller_perf 00:06:25.321 LINK env_dpdk_post_init 00:06:25.581 LINK verify 00:06:25.581 LINK bdev_svc 00:06:25.581 LINK stub 00:06:25.581 LINK spdk_nvme_perf 00:06:25.581 LINK nvme_fuzz 00:06:25.581 LINK pci_ut 00:06:25.581 CC app/vhost/vhost.o 00:06:25.581 LINK vhost_fuzz 00:06:25.842 LINK spdk_bdev 00:06:25.842 LINK spdk_nvme_identify 00:06:25.842 LINK spdk_nvme 00:06:25.842 CC examples/idxd/perf/perf.o 00:06:25.842 LINK test_dma 00:06:25.842 CC examples/sock/hello_world/hello_sock.o 00:06:25.842 CC examples/vmd/led/led.o 00:06:25.842 CC examples/vmd/lsvmd/lsvmd.o 00:06:25.842 CC examples/thread/thread/thread_ex.o 00:06:25.842 LINK mem_callbacks 00:06:25.842 CC test/event/reactor_perf/reactor_perf.o 00:06:25.842 LINK spdk_top 00:06:25.842 CC test/event/reactor/reactor.o 00:06:25.842 CC test/event/app_repeat/app_repeat.o 00:06:25.842 CC test/event/event_perf/event_perf.o 00:06:25.842 LINK vhost 00:06:25.842 CC test/event/scheduler/scheduler.o 00:06:26.104 LINK lsvmd 00:06:26.104 LINK led 00:06:26.104 LINK reactor_perf 00:06:26.104 LINK hello_sock 00:06:26.104 LINK reactor 00:06:26.104 LINK app_repeat 00:06:26.104 LINK event_perf 00:06:26.104 LINK thread 00:06:26.104 LINK idxd_perf 00:06:26.363 LINK scheduler 00:06:26.363 LINK memory_ut 00:06:26.363 CC test/nvme/reset/reset.o 00:06:26.363 CC test/nvme/aer/aer.o 00:06:26.363 CC test/nvme/sgl/sgl.o 00:06:26.363 CC test/nvme/startup/startup.o 00:06:26.364 CC test/nvme/boot_partition/boot_partition.o 00:06:26.364 CC test/nvme/err_injection/err_injection.o 00:06:26.364 CC test/nvme/overhead/overhead.o 00:06:26.364 CC test/nvme/simple_copy/simple_copy.o 00:06:26.364 CC test/nvme/connect_stress/connect_stress.o 00:06:26.364 CC test/nvme/e2edp/nvme_dp.o 00:06:26.364 CC test/nvme/fused_ordering/fused_ordering.o 00:06:26.364 CC test/nvme/cuse/cuse.o 00:06:26.364 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:26.364 CC 
test/nvme/compliance/nvme_compliance.o 00:06:26.364 CC test/nvme/fdp/fdp.o 00:06:26.364 CC test/nvme/reserve/reserve.o 00:06:26.364 CC test/blobfs/mkfs/mkfs.o 00:06:26.364 CC test/accel/dif/dif.o 00:06:26.624 CC test/lvol/esnap/esnap.o 00:06:26.624 CC examples/nvme/arbitration/arbitration.o 00:06:26.624 LINK boot_partition 00:06:26.624 LINK startup 00:06:26.624 CC examples/nvme/reconnect/reconnect.o 00:06:26.624 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:26.624 CC examples/nvme/hello_world/hello_world.o 00:06:26.624 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:26.624 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:26.624 CC examples/nvme/hotplug/hotplug.o 00:06:26.624 LINK connect_stress 00:06:26.624 LINK err_injection 00:06:26.624 CC examples/nvme/abort/abort.o 00:06:26.624 LINK doorbell_aers 00:06:26.624 LINK fused_ordering 00:06:26.624 LINK simple_copy 00:06:26.624 LINK reserve 00:06:26.624 LINK reset 00:06:26.624 LINK sgl 00:06:26.886 LINK mkfs 00:06:26.886 LINK iscsi_fuzz 00:06:26.886 LINK aer 00:06:26.886 CC examples/accel/perf/accel_perf.o 00:06:26.886 LINK nvme_dp 00:06:26.886 LINK overhead 00:06:26.886 LINK nvme_compliance 00:06:26.886 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:26.886 CC examples/blob/cli/blobcli.o 00:06:26.886 LINK fdp 00:06:26.886 CC examples/blob/hello_world/hello_blob.o 00:06:26.886 LINK pmr_persistence 00:06:26.886 LINK cmb_copy 00:06:26.886 LINK hello_world 00:06:26.886 LINK hotplug 00:06:27.147 LINK arbitration 00:06:27.147 LINK reconnect 00:06:27.147 LINK abort 00:06:27.147 LINK dif 00:06:27.147 LINK hello_blob 00:06:27.147 LINK hello_fsdev 00:06:27.147 LINK nvme_manage 00:06:27.147 LINK accel_perf 00:06:27.409 LINK blobcli 00:06:27.673 LINK cuse 00:06:27.673 CC test/bdev/bdevio/bdevio.o 00:06:27.933 CC examples/bdev/hello_world/hello_bdev.o 00:06:27.933 CC examples/bdev/bdevperf/bdevperf.o 00:06:28.194 LINK bdevio 00:06:28.194 LINK hello_bdev 00:06:28.769 LINK bdevperf 00:06:29.342 CC examples/nvmf/nvmf/nvmf.o 00:06:29.603 LINK nvmf 00:06:31.517 LINK esnap 00:06:31.517 00:06:31.517 real 0m55.606s 00:06:31.517 user 8m3.373s 00:06:31.517 sys 5m22.430s 00:06:31.517 06:16:51 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:31.517 06:16:51 make -- common/autotest_common.sh@10 -- $ set +x 00:06:31.517 ************************************ 00:06:31.517 END TEST make 00:06:31.517 ************************************ 00:06:31.517 06:16:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:31.517 06:16:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:31.517 06:16:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:31.517 06:16:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.517 06:16:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:31.517 06:16:51 -- pm/common@44 -- $ pid=2508810 00:06:31.517 06:16:51 -- pm/common@50 -- $ kill -TERM 2508810 00:06:31.517 06:16:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.517 06:16:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:31.517 06:16:51 -- pm/common@44 -- $ pid=2508811 00:06:31.517 06:16:51 -- pm/common@50 -- $ kill -TERM 2508811 00:06:31.517 06:16:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.517 06:16:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
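The pm/common trace that starts above and finishes below is the monitor shutdown path: for each resource monitor launched at the start of the run, the corresponding pidfile under the output power directory is checked and the recorded pid receives SIGTERM. A minimal sketch of that pidfile pattern, with illustrative variable and path names:

    # Sketch of the pidfile/SIGTERM shutdown traced here; names are illustrative.
    for pidfile in "$output_dir"/power/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(<"$pidfile")"
    done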
00:06:31.517 06:16:51 -- pm/common@44 -- $ pid=2508813 00:06:31.517 06:16:51 -- pm/common@50 -- $ kill -TERM 2508813 00:06:31.517 06:16:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.517 06:16:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:31.517 06:16:51 -- pm/common@44 -- $ pid=2508836 00:06:31.517 06:16:51 -- pm/common@50 -- $ sudo -E kill -TERM 2508836 00:06:31.517 06:16:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:31.518 06:16:51 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:31.779 06:16:51 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:31.779 06:16:51 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:31.779 06:16:51 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:31.779 06:16:51 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:31.779 06:16:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.779 06:16:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.779 06:16:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.779 06:16:51 -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.779 06:16:51 -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.779 06:16:51 -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.779 06:16:51 -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.779 06:16:51 -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.779 06:16:51 -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.779 06:16:51 -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.779 06:16:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.779 06:16:51 -- scripts/common.sh@344 -- # case "$op" in 00:06:31.779 06:16:51 -- scripts/common.sh@345 -- # : 1 00:06:31.779 06:16:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.779 06:16:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.779 06:16:51 -- scripts/common.sh@365 -- # decimal 1 00:06:31.779 06:16:51 -- scripts/common.sh@353 -- # local d=1 00:06:31.779 06:16:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.779 06:16:51 -- scripts/common.sh@355 -- # echo 1 00:06:31.779 06:16:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.779 06:16:51 -- scripts/common.sh@366 -- # decimal 2 00:06:31.779 06:16:51 -- scripts/common.sh@353 -- # local d=2 00:06:31.779 06:16:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.779 06:16:51 -- scripts/common.sh@355 -- # echo 2 00:06:31.779 06:16:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.779 06:16:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.779 06:16:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.779 06:16:51 -- scripts/common.sh@368 -- # return 0 00:06:31.779 06:16:51 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.779 06:16:51 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:31.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.779 --rc genhtml_branch_coverage=1 00:06:31.779 --rc genhtml_function_coverage=1 00:06:31.779 --rc genhtml_legend=1 00:06:31.779 --rc geninfo_all_blocks=1 00:06:31.779 --rc geninfo_unexecuted_blocks=1 00:06:31.779 00:06:31.779 ' 00:06:31.779 06:16:51 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:31.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.779 --rc genhtml_branch_coverage=1 00:06:31.779 --rc genhtml_function_coverage=1 00:06:31.779 --rc genhtml_legend=1 00:06:31.779 --rc geninfo_all_blocks=1 00:06:31.779 --rc geninfo_unexecuted_blocks=1 00:06:31.779 00:06:31.779 ' 00:06:31.779 06:16:51 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:31.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.779 --rc genhtml_branch_coverage=1 00:06:31.779 --rc genhtml_function_coverage=1 00:06:31.779 --rc genhtml_legend=1 00:06:31.779 --rc geninfo_all_blocks=1 00:06:31.779 --rc geninfo_unexecuted_blocks=1 00:06:31.779 00:06:31.779 ' 00:06:31.779 06:16:51 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:31.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.779 --rc genhtml_branch_coverage=1 00:06:31.779 --rc genhtml_function_coverage=1 00:06:31.779 --rc genhtml_legend=1 00:06:31.779 --rc geninfo_all_blocks=1 00:06:31.779 --rc geninfo_unexecuted_blocks=1 00:06:31.779 00:06:31.779 ' 00:06:31.779 06:16:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.779 06:16:51 -- nvmf/common.sh@7 -- # uname -s 00:06:31.779 06:16:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.779 06:16:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.779 06:16:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.779 06:16:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.779 06:16:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.779 06:16:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.779 06:16:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.779 06:16:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.779 06:16:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.779 06:16:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.779 06:16:51 -- nvmf/common.sh@17 -- # 
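The trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: both version strings are split on `.`, `-`, and `:` into arrays and compared element by element, padding the shorter one with zeros. A condensed sketch of that comparison, assuming plain numeric components:

    # Return 0 (true) when version $1 sorts strictly before $2.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "old lcov: enable the 1.x --rc options"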
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.779 06:16:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.779 06:16:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.779 06:16:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.779 06:16:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.779 06:16:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.780 06:16:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.780 06:16:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.780 06:16:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.780 06:16:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.780 06:16:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.780 06:16:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.780 06:16:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.780 06:16:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.780 06:16:51 -- paths/export.sh@5 -- # export PATH 00:06:31.780 06:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.780 06:16:51 -- nvmf/common.sh@51 -- # : 0 00:06:31.780 06:16:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.780 06:16:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.780 06:16:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.780 06:16:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.780 06:16:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.780 06:16:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.780 06:16:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.780 06:16:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.780 06:16:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.780 06:16:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:31.780 06:16:51 -- spdk/autotest.sh@32 -- # uname -s 00:06:31.780 06:16:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:31.780 06:16:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:31.780 06:16:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
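The `[: : integer expression expected` message above is a real, harmless bug captured by the trace: nvmf/common.sh line 33 runs `[ "$var" -eq 1 ]` while the variable is empty, and `-eq` refuses a non-integer operand, so the test simply fails and the script falls through to the `-n` branch. A defensive form, shown only to illustrate the fix:

    var=""                                     # e.g. an unset feature flag
    if [[ -n "$var" && "$var" -eq 1 ]]; then   # the -n guard short-circuits the -eq
        echo "feature enabled"
    fi
    if (( ${var:-0} == 1 )); then              # or: give it a numeric default
        echo "feature enabled"
    fi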
00:06:31.780 06:16:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:31.780 06:16:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:31.780 06:16:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:31.780 06:16:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:31.780 06:16:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:31.780 06:16:51 -- spdk/autotest.sh@48 -- # udevadm_pid=2574327 00:06:31.780 06:16:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:31.780 06:16:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:31.780 06:16:51 -- pm/common@17 -- # local monitor 00:06:31.780 06:16:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.780 06:16:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.780 06:16:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.780 06:16:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:31.780 06:16:51 -- pm/common@21 -- # date +%s 00:06:31.780 06:16:51 -- pm/common@25 -- # sleep 1 00:06:31.780 06:16:51 -- pm/common@21 -- # date +%s 00:06:31.780 06:16:51 -- pm/common@21 -- # date +%s 00:06:31.780 06:16:51 -- pm/common@21 -- # date +%s 00:06:31.780 06:16:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079811 00:06:31.780 06:16:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079811 00:06:31.780 06:16:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079811 00:06:31.780 06:16:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079811 00:06:31.780 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079811_collect-cpu-load.pm.log 00:06:31.780 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079811_collect-vmstat.pm.log 00:06:31.780 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079811_collect-cpu-temp.pm.log 00:06:32.041 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079811_collect-bmc-pm.bmc.pm.log 00:06:33.115 06:16:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:33.115 06:16:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:33.115 06:16:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.115 06:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:33.115 06:16:53 -- spdk/autotest.sh@59 -- # create_test_list 00:06:33.115 06:16:53 -- common/autotest_common.sh@750 -- # xtrace_disable 00:06:33.115 06:16:53 -- common/autotest_common.sh@10 -- # set +x 00:06:33.115 06:16:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:33.115 06:16:53 
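autotest.sh lines 33-40 above swap the kernel's core_pattern for SPDK's core-collector.sh and remember the old systemd-coredump handler. xtrace does not print redirections, so the write target is an inference here; the conventional shape would be:

    # Assumed form of the core_pattern swap; the > /proc/... redirect is not
    # visible in the trace. $rootdir and $output_dir are illustrative names.
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    mkdir -p "$output_dir/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern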
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.115 06:16:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.115 06:16:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:33.115 06:16:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.115 06:16:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:33.115 06:16:53 -- common/autotest_common.sh@1455 -- # uname 00:06:33.115 06:16:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:33.115 06:16:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:33.115 06:16:53 -- common/autotest_common.sh@1475 -- # uname 00:06:33.115 06:16:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:33.115 06:16:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:33.115 06:16:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:33.115 lcov: LCOV version 1.15 00:06:33.115 06:16:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:48.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:48.028 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:06.149 06:17:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:06.149 06:17:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.149 06:17:23 -- common/autotest_common.sh@10 -- # set +x 00:07:06.149 06:17:23 -- spdk/autotest.sh@78 -- # rm -f 00:07:06.149 06:17:23 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:06.719 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:07:06.719 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:07:06.719 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:07:06.719 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:07:06.719 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:07:06.719 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:07:06.719 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:07:06.719 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:07:06.978 0000:65:00.0 (144d a80a): Already using the nvme driver 00:07:06.978 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:07:06.978 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:07:06.978 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:07:06.978 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:07:06.978 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:07:06.978 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:07:06.979 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:07:06.979 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:07:07.278 06:17:27 -- 
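The long lcov invocation above takes the pre-test baseline: `-i` captures zero-execution ("initial") counters for every .gcno under the tree, tagged `-t Baseline`; the nvme_stubs.gcno warning only means that object contains no instrumented functions. A baseline pays off when merged with a post-run capture, which this sketch assumes happens later in autotest.sh:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info
    # ... tests run, .gcda counters accumulate ...
    lcov $LCOV_OPTS -q -c --no-external -t Autotest -d "$src" -o cov_test.info
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info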
spdk/autotest.sh@83 -- # get_zoned_devs 00:07:07.278 06:17:27 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:07.278 06:17:27 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:07.278 06:17:27 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:07.278 06:17:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:07.278 06:17:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:07.278 06:17:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:07.278 06:17:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:07.278 06:17:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.278 06:17:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:07.278 06:17:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:07.278 06:17:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:07.278 06:17:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:07.278 06:17:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:07.278 06:17:27 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:07.278 No valid GPT data, bailing 00:07:07.278 06:17:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:07.278 06:17:27 -- scripts/common.sh@394 -- # pt= 00:07:07.278 06:17:27 -- scripts/common.sh@395 -- # return 1 00:07:07.278 06:17:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:07.278 1+0 records in 00:07:07.278 1+0 records out 00:07:07.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00196651 s, 533 MB/s 00:07:07.278 06:17:27 -- spdk/autotest.sh@105 -- # sync 00:07:07.278 06:17:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:07.538 06:17:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:07.538 06:17:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:17.537 06:17:36 -- spdk/autotest.sh@111 -- # uname -s 00:07:17.537 06:17:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:17.537 06:17:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:17.537 06:17:36 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:19.450 Hugepages 00:07:19.450 node hugesize free / total 00:07:19.450 node0 1048576kB 0 / 0 00:07:19.450 node0 2048kB 0 / 0 00:07:19.450 node1 1048576kB 0 / 0 00:07:19.450 node1 2048kB 0 / 0 00:07:19.450 00:07:19.450 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:19.450 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:07:19.450 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:07:19.450 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:07:19.450 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:07:19.450 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:07:19.450 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:07:19.450 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:07:19.450 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:07:19.711 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:07:19.711 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:07:19.711 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:07:19.711 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:07:19.711 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:07:19.711 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:07:19.711 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:07:19.711 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:07:19.711 I/OAT 0000:80:01.7 8086 
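Three guards run back to back above before the disk is stamped: zoned namespaces are excluded (their `queue/zoned` attribute is not `none`), spdk-gpt.py and blkid agree there is no partition table ("No valid GPT data, bailing"), and only then does dd write 1 MiB of zeros. The same flow, compressed into an illustrative sketch:

    for sysdev in /sys/block/nvme*; do
        dev=/dev/$(basename "$sysdev")
        [[ $(< "$sysdev/queue/zoned") != none ]] && continue   # skip zoned devices
        if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1            # destructive: lab disks only
        fi
    done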
0b00 1 ioatdma - - 00:07:19.711 06:17:39 -- spdk/autotest.sh@117 -- # uname -s 00:07:19.711 06:17:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:19.711 06:17:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:19.711 06:17:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:23.013 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:23.013 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:23.013 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:23.013 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:23.274 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:23.274 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:23.274 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:23.274 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:23.275 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:25.189 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:07:25.450 06:17:45 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:26.392 06:17:46 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:26.392 06:17:46 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:26.392 06:17:46 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:26.392 06:17:46 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:26.392 06:17:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:26.392 06:17:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:26.392 06:17:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:26.392 06:17:46 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:26.392 06:17:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:26.392 06:17:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:26.393 06:17:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:07:26.393 06:17:46 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:30.599 Waiting for block devices as requested 00:07:30.599 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:07:30.599 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:07:30.860 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:07:30.860 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:07:30.860 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:07:31.120 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:07:31.120 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:07:31.120 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:07:31.381 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:07:31.381 0000:00:01.1 (8086 0b00): 
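get_nvme_bdfs above discovers controllers by asking gen_nvme.sh for a JSON config and pulling each `params.traddr` out with jq; on this machine that yields the single BDF 0000:65:00.0. The same one-liner, usable standalone from the SPDK tree:

    mapfile -t bdfs < <(scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"        # -> 0000:65:00.0 here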
vfio-pci -> ioatdma 00:07:31.642 06:17:51 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:31.642 06:17:51 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:07:31.642 06:17:51 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:07:31.642 06:17:51 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:31.642 06:17:51 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:31.642 06:17:51 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:31.642 06:17:51 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:07:31.642 06:17:51 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:31.642 06:17:51 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:31.642 06:17:51 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:31.642 06:17:51 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:31.642 06:17:51 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:31.642 06:17:51 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:31.642 06:17:51 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:31.642 06:17:51 -- common/autotest_common.sh@1541 -- # continue 00:07:31.642 06:17:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:31.642 06:17:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.642 06:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:31.642 06:17:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:31.642 06:17:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.642 06:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:31.642 06:17:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:35.848 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:07:35.848 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:07:35.848 06:17:55 -- spdk/autotest.sh@127 -- # timing_exit 
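The id-ctrl parsing above gates the namespace revert: OACS came back as 0x5f, bit 3 (mask 0x8) says the controller supports namespace management, and an unvmcap of 0 means no unallocated capacity is pending, so the loop continues past this controller. The two checks as a standalone sketch:

    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)        # ' 0x5f' here
    if (( oacs & 0x8 )); then                                        # NS-management bit
        unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo 'nothing to revert on /dev/nvme0'
    fi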
afterboot 00:07:35.848 06:17:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.848 06:17:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.848 06:17:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:35.848 06:17:55 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:35.848 06:17:55 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:35.848 06:17:55 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:35.848 06:17:55 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:35.848 06:17:55 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:35.848 06:17:55 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:35.848 06:17:55 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:35.848 06:17:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:35.848 06:17:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:35.848 06:17:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:35.848 06:17:55 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:35.848 06:17:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:35.848 06:17:56 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:35.848 06:17:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:07:35.848 06:17:56 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:35.848 06:17:56 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:07:35.848 06:17:56 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:07:35.848 06:17:56 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:07:35.848 06:17:56 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:07:35.848 06:17:56 -- common/autotest_common.sh@1570 -- # return 0 00:07:35.848 06:17:56 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:07:35.848 06:17:56 -- common/autotest_common.sh@1578 -- # return 0 00:07:35.848 06:17:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:35.848 06:17:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:35.848 06:17:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:35.848 06:17:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:35.848 06:17:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:35.848 06:17:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.848 06:17:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.848 06:17:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:35.848 06:17:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:35.848 06:17:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.848 06:17:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.848 06:17:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.109 ************************************ 00:07:36.109 START TEST env 00:07:36.109 ************************************ 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:36.109 * Looking for test storage... 
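opal_revert_cleanup above filters the discovered BDFs by PCI device id: only controllers reporting 0x0a54 get an Opal revert, and this box's 144d:a80a controller does not match, so the function returns an empty list. The filter, sketched with an illustrative array name:

    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        [[ $(< "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    (( ${#opal_bdfs[@]} )) || echo 'no 0x0a54 controllers; skipping Opal revert'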
00:07:36.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:36.109 06:17:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.109 06:17:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.109 06:17:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.109 06:17:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.109 06:17:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.109 06:17:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.109 06:17:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.109 06:17:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.109 06:17:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.109 06:17:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.109 06:17:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.109 06:17:56 env -- scripts/common.sh@344 -- # case "$op" in 00:07:36.109 06:17:56 env -- scripts/common.sh@345 -- # : 1 00:07:36.109 06:17:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.109 06:17:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.109 06:17:56 env -- scripts/common.sh@365 -- # decimal 1 00:07:36.109 06:17:56 env -- scripts/common.sh@353 -- # local d=1 00:07:36.109 06:17:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.109 06:17:56 env -- scripts/common.sh@355 -- # echo 1 00:07:36.109 06:17:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.109 06:17:56 env -- scripts/common.sh@366 -- # decimal 2 00:07:36.109 06:17:56 env -- scripts/common.sh@353 -- # local d=2 00:07:36.109 06:17:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.109 06:17:56 env -- scripts/common.sh@355 -- # echo 2 00:07:36.109 06:17:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.109 06:17:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.109 06:17:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.109 06:17:56 env -- scripts/common.sh@368 -- # return 0 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:36.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.109 --rc genhtml_branch_coverage=1 00:07:36.109 --rc genhtml_function_coverage=1 00:07:36.109 --rc genhtml_legend=1 00:07:36.109 --rc geninfo_all_blocks=1 00:07:36.109 --rc geninfo_unexecuted_blocks=1 00:07:36.109 00:07:36.109 ' 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:36.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.109 --rc genhtml_branch_coverage=1 00:07:36.109 --rc genhtml_function_coverage=1 00:07:36.109 --rc genhtml_legend=1 00:07:36.109 --rc geninfo_all_blocks=1 00:07:36.109 --rc geninfo_unexecuted_blocks=1 00:07:36.109 00:07:36.109 ' 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:36.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.109 --rc genhtml_branch_coverage=1 00:07:36.109 --rc genhtml_function_coverage=1 
00:07:36.109 --rc genhtml_legend=1 00:07:36.109 --rc geninfo_all_blocks=1 00:07:36.109 --rc geninfo_unexecuted_blocks=1 00:07:36.109 00:07:36.109 ' 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:36.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.109 --rc genhtml_branch_coverage=1 00:07:36.109 --rc genhtml_function_coverage=1 00:07:36.109 --rc genhtml_legend=1 00:07:36.109 --rc geninfo_all_blocks=1 00:07:36.109 --rc geninfo_unexecuted_blocks=1 00:07:36.109 00:07:36.109 ' 00:07:36.109 06:17:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:36.109 06:17:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.109 06:17:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.370 ************************************ 00:07:36.370 START TEST env_memory 00:07:36.370 ************************************ 00:07:36.370 06:17:56 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:36.370 00:07:36.370 00:07:36.370 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.370 http://cunit.sourceforge.net/ 00:07:36.370 00:07:36.370 00:07:36.370 Suite: memory 00:07:36.370 Test: alloc and free memory map ...[2024-11-20 06:17:56.450516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:36.370 passed 00:07:36.370 Test: mem map translation ...[2024-11-20 06:17:56.476299] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:36.370 [2024-11-20 06:17:56.476330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:36.370 [2024-11-20 06:17:56.476378] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:36.370 [2024-11-20 06:17:56.476386] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:36.370 passed 00:07:36.370 Test: mem map registration ...[2024-11-20 06:17:56.531779] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:36.370 [2024-11-20 06:17:56.531819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:36.370 passed 00:07:36.370 Test: mem map adjacent registrations ...passed 00:07:36.370 00:07:36.370 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.370 suites 1 1 n/a 0 0 00:07:36.370 tests 4 4 4 0 0 00:07:36.370 asserts 152 152 152 0 n/a 00:07:36.370 00:07:36.370 Elapsed time = 0.194 seconds 00:07:36.370 00:07:36.370 real 0m0.209s 00:07:36.370 user 0m0.196s 00:07:36.370 sys 0m0.012s 00:07:36.370 06:17:56 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:36.370 06:17:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
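The *ERROR* lines inside env_memory above are deliberate: the suite feeds spdk_mem_map_set_translation and spdk_mem_register invalid (vaddr, len) pairs, such as unaligned 1234-byte spans and the out-of-range address 281474976710656 (2^48), and asserts that each call is rejected, which is why all four tests still pass. Re-running the binary standalone from the SPDK tree reproduces the same output:

    test/env/memory/memory_ut    # the *ERROR* lines are expected negative-path output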
00:07:36.370 ************************************ 00:07:36.370 END TEST env_memory 00:07:36.370 ************************************ 00:07:36.370 06:17:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:36.370 06:17:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:36.370 06:17:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.370 06:17:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.630 ************************************ 00:07:36.630 START TEST env_vtophys 00:07:36.630 ************************************ 00:07:36.630 06:17:56 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:36.630 EAL: lib.eal log level changed from notice to debug 00:07:36.630 EAL: Detected lcore 0 as core 0 on socket 0 00:07:36.630 EAL: Detected lcore 1 as core 1 on socket 0 00:07:36.630 EAL: Detected lcore 2 as core 2 on socket 0 00:07:36.630 EAL: Detected lcore 3 as core 3 on socket 0 00:07:36.630 EAL: Detected lcore 4 as core 4 on socket 0 00:07:36.630 EAL: Detected lcore 5 as core 5 on socket 0 00:07:36.630 EAL: Detected lcore 6 as core 6 on socket 0 00:07:36.630 EAL: Detected lcore 7 as core 7 on socket 0 00:07:36.630 EAL: Detected lcore 8 as core 8 on socket 0 00:07:36.631 EAL: Detected lcore 9 as core 9 on socket 0 00:07:36.631 EAL: Detected lcore 10 as core 10 on socket 0 00:07:36.631 EAL: Detected lcore 11 as core 11 on socket 0 00:07:36.631 EAL: Detected lcore 12 as core 12 on socket 0 00:07:36.631 EAL: Detected lcore 13 as core 13 on socket 0 00:07:36.631 EAL: Detected lcore 14 as core 14 on socket 0 00:07:36.631 EAL: Detected lcore 15 as core 15 on socket 0 00:07:36.631 EAL: Detected lcore 16 as core 16 on socket 0 00:07:36.631 EAL: Detected lcore 17 as core 17 on socket 0 00:07:36.631 EAL: Detected lcore 18 as core 18 on socket 0 00:07:36.631 EAL: Detected lcore 19 as core 19 on socket 0 00:07:36.631 EAL: Detected lcore 20 as core 20 on socket 0 00:07:36.631 EAL: Detected lcore 21 as core 21 on socket 0 00:07:36.631 EAL: Detected lcore 22 as core 22 on socket 0 00:07:36.631 EAL: Detected lcore 23 as core 23 on socket 0 00:07:36.631 EAL: Detected lcore 24 as core 24 on socket 0 00:07:36.631 EAL: Detected lcore 25 as core 25 on socket 0 00:07:36.631 EAL: Detected lcore 26 as core 26 on socket 0 00:07:36.631 EAL: Detected lcore 27 as core 27 on socket 0 00:07:36.631 EAL: Detected lcore 28 as core 28 on socket 0 00:07:36.631 EAL: Detected lcore 29 as core 29 on socket 0 00:07:36.631 EAL: Detected lcore 30 as core 30 on socket 0 00:07:36.631 EAL: Detected lcore 31 as core 31 on socket 0 00:07:36.631 EAL: Detected lcore 32 as core 32 on socket 0 00:07:36.631 EAL: Detected lcore 33 as core 33 on socket 0 00:07:36.631 EAL: Detected lcore 34 as core 34 on socket 0 00:07:36.631 EAL: Detected lcore 35 as core 35 on socket 0 00:07:36.631 EAL: Detected lcore 36 as core 0 on socket 1 00:07:36.631 EAL: Detected lcore 37 as core 1 on socket 1 00:07:36.631 EAL: Detected lcore 38 as core 2 on socket 1 00:07:36.631 EAL: Detected lcore 39 as core 3 on socket 1 00:07:36.631 EAL: Detected lcore 40 as core 4 on socket 1 00:07:36.631 EAL: Detected lcore 41 as core 5 on socket 1 00:07:36.631 EAL: Detected lcore 42 as core 6 on socket 1 00:07:36.631 EAL: Detected lcore 43 as core 7 on socket 1 00:07:36.631 EAL: Detected lcore 44 as core 8 on socket 1 00:07:36.631 EAL: Detected lcore 45 as core 9 on socket 1 
00:07:36.631 EAL: Detected lcore 46 as core 10 on socket 1 00:07:36.631 EAL: Detected lcore 47 as core 11 on socket 1 00:07:36.631 EAL: Detected lcore 48 as core 12 on socket 1 00:07:36.631 EAL: Detected lcore 49 as core 13 on socket 1 00:07:36.631 EAL: Detected lcore 50 as core 14 on socket 1 00:07:36.631 EAL: Detected lcore 51 as core 15 on socket 1 00:07:36.631 EAL: Detected lcore 52 as core 16 on socket 1 00:07:36.631 EAL: Detected lcore 53 as core 17 on socket 1 00:07:36.631 EAL: Detected lcore 54 as core 18 on socket 1 00:07:36.631 EAL: Detected lcore 55 as core 19 on socket 1 00:07:36.631 EAL: Detected lcore 56 as core 20 on socket 1 00:07:36.631 EAL: Detected lcore 57 as core 21 on socket 1 00:07:36.631 EAL: Detected lcore 58 as core 22 on socket 1 00:07:36.631 EAL: Detected lcore 59 as core 23 on socket 1 00:07:36.631 EAL: Detected lcore 60 as core 24 on socket 1 00:07:36.631 EAL: Detected lcore 61 as core 25 on socket 1 00:07:36.631 EAL: Detected lcore 62 as core 26 on socket 1 00:07:36.631 EAL: Detected lcore 63 as core 27 on socket 1 00:07:36.631 EAL: Detected lcore 64 as core 28 on socket 1 00:07:36.631 EAL: Detected lcore 65 as core 29 on socket 1 00:07:36.631 EAL: Detected lcore 66 as core 30 on socket 1 00:07:36.631 EAL: Detected lcore 67 as core 31 on socket 1 00:07:36.631 EAL: Detected lcore 68 as core 32 on socket 1 00:07:36.631 EAL: Detected lcore 69 as core 33 on socket 1 00:07:36.631 EAL: Detected lcore 70 as core 34 on socket 1 00:07:36.631 EAL: Detected lcore 71 as core 35 on socket 1 00:07:36.631 EAL: Detected lcore 72 as core 0 on socket 0 00:07:36.631 EAL: Detected lcore 73 as core 1 on socket 0 00:07:36.631 EAL: Detected lcore 74 as core 2 on socket 0 00:07:36.631 EAL: Detected lcore 75 as core 3 on socket 0 00:07:36.631 EAL: Detected lcore 76 as core 4 on socket 0 00:07:36.631 EAL: Detected lcore 77 as core 5 on socket 0 00:07:36.631 EAL: Detected lcore 78 as core 6 on socket 0 00:07:36.631 EAL: Detected lcore 79 as core 7 on socket 0 00:07:36.631 EAL: Detected lcore 80 as core 8 on socket 0 00:07:36.631 EAL: Detected lcore 81 as core 9 on socket 0 00:07:36.631 EAL: Detected lcore 82 as core 10 on socket 0 00:07:36.631 EAL: Detected lcore 83 as core 11 on socket 0 00:07:36.631 EAL: Detected lcore 84 as core 12 on socket 0 00:07:36.631 EAL: Detected lcore 85 as core 13 on socket 0 00:07:36.631 EAL: Detected lcore 86 as core 14 on socket 0 00:07:36.631 EAL: Detected lcore 87 as core 15 on socket 0 00:07:36.631 EAL: Detected lcore 88 as core 16 on socket 0 00:07:36.631 EAL: Detected lcore 89 as core 17 on socket 0 00:07:36.631 EAL: Detected lcore 90 as core 18 on socket 0 00:07:36.631 EAL: Detected lcore 91 as core 19 on socket 0 00:07:36.631 EAL: Detected lcore 92 as core 20 on socket 0 00:07:36.631 EAL: Detected lcore 93 as core 21 on socket 0 00:07:36.631 EAL: Detected lcore 94 as core 22 on socket 0 00:07:36.631 EAL: Detected lcore 95 as core 23 on socket 0 00:07:36.631 EAL: Detected lcore 96 as core 24 on socket 0 00:07:36.631 EAL: Detected lcore 97 as core 25 on socket 0 00:07:36.631 EAL: Detected lcore 98 as core 26 on socket 0 00:07:36.631 EAL: Detected lcore 99 as core 27 on socket 0 00:07:36.631 EAL: Detected lcore 100 as core 28 on socket 0 00:07:36.631 EAL: Detected lcore 101 as core 29 on socket 0 00:07:36.631 EAL: Detected lcore 102 as core 30 on socket 0 00:07:36.631 EAL: Detected lcore 103 as core 31 on socket 0 00:07:36.631 EAL: Detected lcore 104 as core 32 on socket 0 00:07:36.631 EAL: Detected lcore 105 as core 33 on socket 0 00:07:36.631 EAL: 
Detected lcore 106 as core 34 on socket 0 00:07:36.631 EAL: Detected lcore 107 as core 35 on socket 0 00:07:36.631 EAL: Detected lcore 108 as core 0 on socket 1 00:07:36.631 EAL: Detected lcore 109 as core 1 on socket 1 00:07:36.631 EAL: Detected lcore 110 as core 2 on socket 1 00:07:36.631 EAL: Detected lcore 111 as core 3 on socket 1 00:07:36.631 EAL: Detected lcore 112 as core 4 on socket 1 00:07:36.631 EAL: Detected lcore 113 as core 5 on socket 1 00:07:36.631 EAL: Detected lcore 114 as core 6 on socket 1 00:07:36.631 EAL: Detected lcore 115 as core 7 on socket 1 00:07:36.631 EAL: Detected lcore 116 as core 8 on socket 1 00:07:36.631 EAL: Detected lcore 117 as core 9 on socket 1 00:07:36.631 EAL: Detected lcore 118 as core 10 on socket 1 00:07:36.631 EAL: Detected lcore 119 as core 11 on socket 1 00:07:36.631 EAL: Detected lcore 120 as core 12 on socket 1 00:07:36.631 EAL: Detected lcore 121 as core 13 on socket 1 00:07:36.631 EAL: Detected lcore 122 as core 14 on socket 1 00:07:36.631 EAL: Detected lcore 123 as core 15 on socket 1 00:07:36.631 EAL: Detected lcore 124 as core 16 on socket 1 00:07:36.631 EAL: Detected lcore 125 as core 17 on socket 1 00:07:36.631 EAL: Detected lcore 126 as core 18 on socket 1 00:07:36.631 EAL: Detected lcore 127 as core 19 on socket 1 00:07:36.631 EAL: Skipped lcore 128 as core 20 on socket 1 00:07:36.631 EAL: Skipped lcore 129 as core 21 on socket 1 00:07:36.631 EAL: Skipped lcore 130 as core 22 on socket 1 00:07:36.631 EAL: Skipped lcore 131 as core 23 on socket 1 00:07:36.631 EAL: Skipped lcore 132 as core 24 on socket 1 00:07:36.631 EAL: Skipped lcore 133 as core 25 on socket 1 00:07:36.631 EAL: Skipped lcore 134 as core 26 on socket 1 00:07:36.631 EAL: Skipped lcore 135 as core 27 on socket 1 00:07:36.631 EAL: Skipped lcore 136 as core 28 on socket 1 00:07:36.631 EAL: Skipped lcore 137 as core 29 on socket 1 00:07:36.631 EAL: Skipped lcore 138 as core 30 on socket 1 00:07:36.631 EAL: Skipped lcore 139 as core 31 on socket 1 00:07:36.631 EAL: Skipped lcore 140 as core 32 on socket 1 00:07:36.631 EAL: Skipped lcore 141 as core 33 on socket 1 00:07:36.631 EAL: Skipped lcore 142 as core 34 on socket 1 00:07:36.631 EAL: Skipped lcore 143 as core 35 on socket 1 00:07:36.631 EAL: Maximum logical cores by configuration: 128 00:07:36.631 EAL: Detected CPU lcores: 128 00:07:36.631 EAL: Detected NUMA nodes: 2 00:07:36.631 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:36.631 EAL: Detected shared linkage of DPDK 00:07:36.631 EAL: No shared files mode enabled, IPC will be disabled 00:07:36.631 EAL: Bus pci wants IOVA as 'DC' 00:07:36.631 EAL: Buses did not request a specific IOVA mode. 00:07:36.631 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:36.631 EAL: Selected IOVA mode 'VA' 00:07:36.631 EAL: Probing VFIO support... 00:07:36.631 EAL: IOMMU type 1 (Type 1) is supported 00:07:36.631 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:36.631 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:36.631 EAL: VFIO support initialized 00:07:36.631 EAL: Ask a virtual area of 0x2e000 bytes 00:07:36.631 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:36.631 EAL: Setting up physically contiguous memory... 
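EAL enumerates 144 hardware threads above but admits only the first 128 and skips lcores 128-143; the ceiling is DPDK's compile-time RTE_MAX_LCORE, which this build evidently sets to 128 ("Maximum logical cores by configuration: 128"). That constant lives in the generated rte_config.h; checking it is a one-liner, though the path below is illustrative and varies by build layout:

    grep RTE_MAX_LCORE dpdk/build/include/rte_config.h    # e.g. #define RTE_MAX_LCORE 128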
00:07:36.631 EAL: Setting maximum number of open files to 524288 00:07:36.631 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:36.631 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:36.631 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:36.631 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.631 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:36.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.631 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.631 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:36.631 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:36.631 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.631 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:36.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.631 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.631 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:36.631 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:36.631 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.631 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:36.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.631 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.631 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:36.631 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:36.631 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.631 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:36.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.631 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.631 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:36.631 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:36.631 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:36.631 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.631 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:36.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:36.632 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.632 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:36.632 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:36.632 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.632 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:36.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:36.632 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.632 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:36.632 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:36.632 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.632 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:36.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:36.632 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.632 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:36.632 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:36.632 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.632 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:36.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:36.632 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.632 EAL: Virtual area found 
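The address map above is regular: per NUMA socket, EAL creates four memseg lists of n_segs:8192 pages at hugepage_sz:2097152, and each list gets a 0x400000000-byte VA reservation because 8192 pages of 2 MiB is exactly 16 GiB. The arithmetic:

    printf '0x%x bytes = %d GiB\n' $(( 8192 * 2097152 )) $(( 8192 * 2097152 >> 30 ))
    # -> 0x400000000 bytes = 16 GiB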
at 0x201c01000000 (size = 0x400000000) 00:07:36.632 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:36.632 EAL: Hugepages will be freed exactly as allocated. 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: TSC frequency is ~2400000 KHz 00:07:36.632 EAL: Main lcore 0 is ready (tid=7f44aa0dea00;cpuset=[0]) 00:07:36.632 EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 0 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 2MB 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:36.632 EAL: Mem event callback 'spdk:(nil)' registered 00:07:36.632 00:07:36.632 00:07:36.632 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.632 http://cunit.sourceforge.net/ 00:07:36.632 00:07:36.632 00:07:36.632 Suite: components_suite 00:07:36.632 Test: vtophys_malloc_test ...passed 00:07:36.632 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 4 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 4MB 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was shrunk by 4MB 00:07:36.632 EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 4 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 6MB 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was shrunk by 6MB 00:07:36.632 EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 4 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 10MB 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was shrunk by 10MB 00:07:36.632 EAL: Trying to obtain current memory policy. 
00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 4 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 18MB 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was shrunk by 18MB 00:07:36.632 EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 4 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 34MB 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was shrunk by 34MB 00:07:36.632 EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 4 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 66MB 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was shrunk by 66MB 00:07:36.632 EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.632 EAL: Restoring previous memory policy: 4 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was expanded by 130MB 00:07:36.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.632 EAL: request: mp_malloc_sync 00:07:36.632 EAL: No shared files mode enabled, IPC is disabled 00:07:36.632 EAL: Heap on socket 0 was shrunk by 130MB 00:07:36.632 EAL: Trying to obtain current memory policy. 00:07:36.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.892 EAL: Restoring previous memory policy: 4 00:07:36.892 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.892 EAL: request: mp_malloc_sync 00:07:36.892 EAL: No shared files mode enabled, IPC is disabled 00:07:36.892 EAL: Heap on socket 0 was expanded by 258MB 00:07:36.892 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.892 EAL: request: mp_malloc_sync 00:07:36.892 EAL: No shared files mode enabled, IPC is disabled 00:07:36.892 EAL: Heap on socket 0 was shrunk by 258MB 00:07:36.892 EAL: Trying to obtain current memory policy. 
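The vtophys_spdk_malloc_test rounds are not arbitrary: each allocation is 2^n + 2 MB for n = 1..10 (4 MB through 258 MB so far, with the 514 MB and 1026 MB rounds following below), presumably so every request lands just past a power-of-two boundary and forces a fresh heap expansion. The sequence, generated:

    for n in {1..10}; do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB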
00:07:36.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:36.892 EAL: Restoring previous memory policy: 4 00:07:36.892 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.892 EAL: request: mp_malloc_sync 00:07:36.892 EAL: No shared files mode enabled, IPC is disabled 00:07:36.892 EAL: Heap on socket 0 was expanded by 514MB 00:07:36.892 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.152 EAL: request: mp_malloc_sync 00:07:37.152 EAL: No shared files mode enabled, IPC is disabled 00:07:37.152 EAL: Heap on socket 0 was shrunk by 514MB 00:07:37.152 EAL: Trying to obtain current memory policy. 00:07:37.152 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.152 EAL: Restoring previous memory policy: 4 00:07:37.152 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.152 EAL: request: mp_malloc_sync 00:07:37.152 EAL: No shared files mode enabled, IPC is disabled 00:07:37.152 EAL: Heap on socket 0 was expanded by 1026MB 00:07:37.152 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.412 EAL: request: mp_malloc_sync 00:07:37.412 EAL: No shared files mode enabled, IPC is disabled 00:07:37.412 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:37.412 passed 00:07:37.412 00:07:37.412 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.412 suites 1 1 n/a 0 0 00:07:37.412 tests 2 2 2 0 0 00:07:37.412 asserts 497 497 497 0 n/a 00:07:37.412 00:07:37.412 Elapsed time = 0.685 seconds 00:07:37.412 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.412 EAL: request: mp_malloc_sync 00:07:37.412 EAL: No shared files mode enabled, IPC is disabled 00:07:37.412 EAL: Heap on socket 0 was shrunk by 2MB 00:07:37.412 EAL: No shared files mode enabled, IPC is disabled 00:07:37.412 EAL: No shared files mode enabled, IPC is disabled 00:07:37.412 EAL: No shared files mode enabled, IPC is disabled 00:07:37.412 00:07:37.412 real 0m0.833s 00:07:37.412 user 0m0.436s 00:07:37.412 sys 0m0.372s 00:07:37.412 06:17:57 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.412 06:17:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:37.412 ************************************ 00:07:37.412 END TEST env_vtophys 00:07:37.412 ************************************ 00:07:37.412 06:17:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:37.412 06:17:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:37.412 06:17:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.412 06:17:57 env -- common/autotest_common.sh@10 -- # set +x 00:07:37.412 ************************************ 00:07:37.412 START TEST env_pci 00:07:37.412 ************************************ 00:07:37.412 06:17:57 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:37.412 00:07:37.412 00:07:37.412 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.412 http://cunit.sourceforge.net/ 00:07:37.412 00:07:37.412 00:07:37.412 Suite: pci 00:07:37.412 Test: pci_hook ...[2024-11-20 06:17:57.620010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2593716 has claimed it 00:07:37.412 EAL: Cannot find device (10000:00:01.0) 00:07:37.412 EAL: Failed to attach device on primary process 00:07:37.412 passed 00:07:37.412 00:07:37.412 Run Summary: Type Total Ran Passed Failed Inactive 
00:07:37.412 suites 1 1 n/a 0 0 00:07:37.412 tests 1 1 1 0 0 00:07:37.412 asserts 25 25 25 0 n/a 00:07:37.412 00:07:37.412 Elapsed time = 0.030 seconds 00:07:37.412 00:07:37.412 real 0m0.052s 00:07:37.412 user 0m0.022s 00:07:37.412 sys 0m0.029s 00:07:37.412 06:17:57 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.412 06:17:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:37.412 ************************************ 00:07:37.412 END TEST env_pci 00:07:37.412 ************************************ 00:07:37.673 06:17:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:37.673 06:17:57 env -- env/env.sh@15 -- # uname 00:07:37.673 06:17:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:37.673 06:17:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:37.673 06:17:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:37.673 06:17:57 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:37.673 06:17:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.673 06:17:57 env -- common/autotest_common.sh@10 -- # set +x 00:07:37.673 ************************************ 00:07:37.673 START TEST env_dpdk_post_init 00:07:37.673 ************************************ 00:07:37.673 06:17:57 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:37.673 EAL: Detected CPU lcores: 128 00:07:37.673 EAL: Detected NUMA nodes: 2 00:07:37.673 EAL: Detected shared linkage of DPDK 00:07:37.673 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:37.673 EAL: Selected IOVA mode 'VA' 00:07:37.673 EAL: VFIO support initialized 00:07:37.673 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:37.673 EAL: Using IOMMU type 1 (Type 1) 00:07:37.933 EAL: Ignore mapping IO port bar(1) 00:07:37.933 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:07:38.194 EAL: Ignore mapping IO port bar(1) 00:07:38.194 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:07:38.194 EAL: Ignore mapping IO port bar(1) 00:07:38.454 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:07:38.454 EAL: Ignore mapping IO port bar(1) 00:07:38.715 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:07:38.715 EAL: Ignore mapping IO port bar(1) 00:07:38.976 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:07:38.976 EAL: Ignore mapping IO port bar(1) 00:07:38.976 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:07:39.235 EAL: Ignore mapping IO port bar(1) 00:07:39.235 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:07:39.496 EAL: Ignore mapping IO port bar(1) 00:07:39.496 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:07:39.807 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:07:39.807 EAL: Ignore mapping IO port bar(1) 00:07:40.121 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:07:40.121 EAL: Ignore mapping IO port bar(1) 00:07:40.121 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:07:40.439 EAL: Ignore mapping IO port bar(1) 00:07:40.439 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:07:40.439 EAL: Ignore mapping IO port bar(1) 00:07:40.700 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:07:40.700 EAL: Ignore mapping IO port bar(1) 00:07:40.960 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:07:40.960 EAL: Ignore mapping IO port bar(1) 00:07:41.221 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:07:41.221 EAL: Ignore mapping IO port bar(1) 00:07:41.221 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:07:41.482 EAL: Ignore mapping IO port bar(1) 00:07:41.482 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:07:41.482 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:07:41.482 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:07:41.742 Starting DPDK initialization... 00:07:41.742 Starting SPDK post initialization... 00:07:41.742 SPDK NVMe probe 00:07:41.742 Attaching to 0000:65:00.0 00:07:41.742 Attached to 0000:65:00.0 00:07:41.742 Cleaning up... 00:07:43.656 00:07:43.656 real 0m5.744s 00:07:43.656 user 0m0.111s 00:07:43.656 sys 0m0.192s 00:07:43.656 06:18:03 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.656 06:18:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:43.656 ************************************ 00:07:43.656 END TEST env_dpdk_post_init 00:07:43.656 ************************************ 00:07:43.656 06:18:03 env -- env/env.sh@26 -- # uname 00:07:43.656 06:18:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:43.656 06:18:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:43.656 06:18:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.656 06:18:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.656 06:18:03 env -- common/autotest_common.sh@10 -- # set +x 00:07:43.656 ************************************ 00:07:43.656 START TEST env_mem_callbacks 00:07:43.656 ************************************ 00:07:43.656 06:18:03 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:43.656 EAL: Detected CPU lcores: 128 00:07:43.656 EAL: Detected NUMA nodes: 2 00:07:43.656 EAL: Detected shared linkage of DPDK 00:07:43.656 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:43.656 EAL: Selected IOVA mode 'VA' 00:07:43.656 EAL: VFIO support initialized 00:07:43.656 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:43.656 00:07:43.656 00:07:43.656 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.656 http://cunit.sourceforge.net/ 00:07:43.656 00:07:43.656 00:07:43.656 Suite: memory 00:07:43.656 Test: test ... 
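
The trace that follows pairs the CUnit test's malloc/free lines with register/unregister lines emitted by its SPDK mem event callback. Registrations apparently round up to hugepage (2 MB) granularity, which would explain why the 3145728-byte malloc produces a 4194304-byte registration, and why the 64-byte malloc is carved from an already-registered region with no new register line. Below is a sketch for sanity-checking that every registered range is eventually unregistered; "mem_callbacks.log" is a placeholder, and teardown unregisters may fall outside the captured window.

    # Sketch: pair register/unregister events (one event per line with a
    # leading timestamp: $2 = verb, $3 = address, $4 = size in bytes).
    awk '$2 == "register"   { bytes[$3] += $4 }
         $2 == "unregister" { bytes[$3] -= $4 }
         END { for (a in bytes) if (bytes[a] != 0) print "unbalanced:", a, bytes[a] }' mem_callbacks.log
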
00:07:43.656 register 0x200000200000 2097152
00:07:43.656 malloc 3145728
00:07:43.656 register 0x200000400000 4194304
00:07:43.656 buf 0x200000500000 len 3145728 PASSED
00:07:43.656 malloc 64
00:07:43.656 buf 0x2000004fff40 len 64 PASSED
00:07:43.656 malloc 4194304
00:07:43.656 register 0x200000800000 6291456
00:07:43.656 buf 0x200000a00000 len 4194304 PASSED
00:07:43.656 free 0x200000500000 3145728
00:07:43.656 free 0x2000004fff40 64
00:07:43.656 unregister 0x200000400000 4194304 PASSED
00:07:43.656 free 0x200000a00000 4194304
00:07:43.657 unregister 0x200000800000 6291456 PASSED
00:07:43.657 malloc 8388608
00:07:43.657 register 0x200000400000 10485760
00:07:43.657 buf 0x200000600000 len 8388608 PASSED
00:07:43.657 free 0x200000600000 8388608
00:07:43.657 unregister 0x200000400000 10485760 PASSED
00:07:43.657 passed
00:07:43.657
00:07:43.657 Run Summary: Type Total Ran Passed Failed Inactive
00:07:43.657 suites 1 1 n/a 0 0
00:07:43.657 tests 1 1 1 0 0
00:07:43.657 asserts 15 15 15 0 n/a
00:07:43.657
00:07:43.657 Elapsed time = 0.010 seconds
00:07:43.657
00:07:43.657 real 0m0.067s
00:07:43.657 user 0m0.019s
00:07:43.657 sys 0m0.048s
00:07:43.657 06:18:03 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:43.657 06:18:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:07:43.657 ************************************
00:07:43.657 END TEST env_mem_callbacks
00:07:43.657 ************************************
00:07:43.657
00:07:43.657 real 0m7.540s
00:07:43.657 user 0m1.057s
00:07:43.657 sys 0m1.047s
00:07:43.657 06:18:03 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:43.657 06:18:03 env -- common/autotest_common.sh@10 -- # set +x
00:07:43.657 ************************************
00:07:43.657 END TEST env
00:07:43.657 ************************************
00:07:43.657 06:18:03 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:07:43.657 06:18:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:43.657 06:18:03 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:43.657 06:18:03 -- common/autotest_common.sh@10 -- # set +x
00:07:43.657 ************************************
00:07:43.657 START TEST rpc
00:07:43.657 ************************************
00:07:43.657 06:18:03 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:07:43.657 * Looking for test storage...
00:07:43.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:43.657 06:18:03 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:43.657 06:18:03 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:43.657 06:18:03 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.918 06:18:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.918 06:18:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.918 06:18:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.918 06:18:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.918 06:18:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.918 06:18:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.918 06:18:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.918 06:18:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:43.918 06:18:03 rpc -- scripts/common.sh@345 -- # : 1 00:07:43.918 06:18:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.918 06:18:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.918 06:18:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:43.918 06:18:03 rpc -- scripts/common.sh@353 -- # local d=1 00:07:43.918 06:18:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.918 06:18:03 rpc -- scripts/common.sh@355 -- # echo 1 00:07:43.918 06:18:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.918 06:18:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@353 -- # local d=2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.918 06:18:03 rpc -- scripts/common.sh@355 -- # echo 2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.918 06:18:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.918 06:18:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.918 06:18:03 rpc -- scripts/common.sh@368 -- # return 0 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.918 --rc genhtml_branch_coverage=1 00:07:43.918 --rc genhtml_function_coverage=1 00:07:43.918 --rc genhtml_legend=1 00:07:43.918 --rc geninfo_all_blocks=1 00:07:43.918 --rc geninfo_unexecuted_blocks=1 00:07:43.918 00:07:43.918 ' 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.918 --rc genhtml_branch_coverage=1 00:07:43.918 --rc genhtml_function_coverage=1 00:07:43.918 --rc genhtml_legend=1 00:07:43.918 --rc geninfo_all_blocks=1 00:07:43.918 --rc geninfo_unexecuted_blocks=1 00:07:43.918 00:07:43.918 ' 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.918 --rc genhtml_branch_coverage=1 00:07:43.918 --rc genhtml_function_coverage=1 
00:07:43.918 --rc genhtml_legend=1 00:07:43.918 --rc geninfo_all_blocks=1 00:07:43.918 --rc geninfo_unexecuted_blocks=1 00:07:43.918 00:07:43.918 ' 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:43.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.918 --rc genhtml_branch_coverage=1 00:07:43.918 --rc genhtml_function_coverage=1 00:07:43.918 --rc genhtml_legend=1 00:07:43.918 --rc geninfo_all_blocks=1 00:07:43.918 --rc geninfo_unexecuted_blocks=1 00:07:43.918 00:07:43.918 ' 00:07:43.918 06:18:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2595171 00:07:43.918 06:18:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:43.918 06:18:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2595171 00:07:43.918 06:18:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@833 -- # '[' -z 2595171 ']' 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.918 06:18:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.919 [2024-11-20 06:18:04.036280] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:07:43.919 [2024-11-20 06:18:04.036351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595171 ] 00:07:43.919 [2024-11-20 06:18:04.126859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.919 [2024-11-20 06:18:04.178753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:43.919 [2024-11-20 06:18:04.178804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2595171' to capture a snapshot of events at runtime. 00:07:43.919 [2024-11-20 06:18:04.178813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.919 [2024-11-20 06:18:04.178821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.919 [2024-11-20 06:18:04.178828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2595171 for offline analysis/debug. 
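
The app_setup_trace NOTICE lines above spell out the capture workflow for the tracepoint group enabled with -e bdev. A minimal sketch of that flow follows, assuming an SPDK build tree with spdk_tgt and spdk_trace under build/bin (the spdk_trace invocation itself is quoted verbatim in the NOTICE; the sleep is a crude stand-in for the harness's waitforlisten):

    # Start the target with bdev tracepoints, snapshot events, then stop it.
    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    sleep 5
    ./build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"
    # or copy /dev/shm/spdk_tgt_trace.pid$tgt_pid for offline analysis
    kill "$tgt_pid"
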
00:07:43.919 [2024-11-20 06:18:04.179605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.861 06:18:04 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:44.861 06:18:04 rpc -- common/autotest_common.sh@866 -- # return 0 00:07:44.861 06:18:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:44.861 06:18:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:44.861 06:18:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:44.861 06:18:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:44.861 06:18:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:44.861 06:18:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.861 06:18:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.861 ************************************ 00:07:44.861 START TEST rpc_integrity 00:07:44.861 ************************************ 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.861 06:18:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.861 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:44.861 { 00:07:44.861 "name": "Malloc0", 00:07:44.861 "aliases": [ 00:07:44.861 "a6086e89-45b8-41f2-97f9-cb155b0c0926" 00:07:44.861 ], 00:07:44.861 "product_name": "Malloc disk", 00:07:44.861 "block_size": 512, 00:07:44.861 "num_blocks": 16384, 00:07:44.861 "uuid": "a6086e89-45b8-41f2-97f9-cb155b0c0926", 00:07:44.861 "assigned_rate_limits": { 00:07:44.861 "rw_ios_per_sec": 0, 00:07:44.861 "rw_mbytes_per_sec": 0, 00:07:44.861 "r_mbytes_per_sec": 0, 00:07:44.861 "w_mbytes_per_sec": 0 00:07:44.861 }, 
00:07:44.861 "claimed": false, 00:07:44.861 "zoned": false, 00:07:44.861 "supported_io_types": { 00:07:44.861 "read": true, 00:07:44.861 "write": true, 00:07:44.861 "unmap": true, 00:07:44.861 "flush": true, 00:07:44.861 "reset": true, 00:07:44.861 "nvme_admin": false, 00:07:44.861 "nvme_io": false, 00:07:44.861 "nvme_io_md": false, 00:07:44.861 "write_zeroes": true, 00:07:44.861 "zcopy": true, 00:07:44.861 "get_zone_info": false, 00:07:44.861 "zone_management": false, 00:07:44.861 "zone_append": false, 00:07:44.861 "compare": false, 00:07:44.861 "compare_and_write": false, 00:07:44.861 "abort": true, 00:07:44.861 "seek_hole": false, 00:07:44.861 "seek_data": false, 00:07:44.862 "copy": true, 00:07:44.862 "nvme_iov_md": false 00:07:44.862 }, 00:07:44.862 "memory_domains": [ 00:07:44.862 { 00:07:44.862 "dma_device_id": "system", 00:07:44.862 "dma_device_type": 1 00:07:44.862 }, 00:07:44.862 { 00:07:44.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.862 "dma_device_type": 2 00:07:44.862 } 00:07:44.862 ], 00:07:44.862 "driver_specific": {} 00:07:44.862 } 00:07:44.862 ]' 00:07:44.862 06:18:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 [2024-11-20 06:18:05.028444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:44.862 [2024-11-20 06:18:05.028490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.862 [2024-11-20 06:18:05.028507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xedf800 00:07:44.862 [2024-11-20 06:18:05.028515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.862 [2024-11-20 06:18:05.030086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.862 [2024-11-20 06:18:05.030124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:44.862 Passthru0 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:44.862 { 00:07:44.862 "name": "Malloc0", 00:07:44.862 "aliases": [ 00:07:44.862 "a6086e89-45b8-41f2-97f9-cb155b0c0926" 00:07:44.862 ], 00:07:44.862 "product_name": "Malloc disk", 00:07:44.862 "block_size": 512, 00:07:44.862 "num_blocks": 16384, 00:07:44.862 "uuid": "a6086e89-45b8-41f2-97f9-cb155b0c0926", 00:07:44.862 "assigned_rate_limits": { 00:07:44.862 "rw_ios_per_sec": 0, 00:07:44.862 "rw_mbytes_per_sec": 0, 00:07:44.862 "r_mbytes_per_sec": 0, 00:07:44.862 "w_mbytes_per_sec": 0 00:07:44.862 }, 00:07:44.862 "claimed": true, 00:07:44.862 "claim_type": "exclusive_write", 00:07:44.862 "zoned": false, 00:07:44.862 "supported_io_types": { 00:07:44.862 "read": true, 00:07:44.862 "write": true, 00:07:44.862 "unmap": true, 00:07:44.862 "flush": 
true, 00:07:44.862 "reset": true, 00:07:44.862 "nvme_admin": false, 00:07:44.862 "nvme_io": false, 00:07:44.862 "nvme_io_md": false, 00:07:44.862 "write_zeroes": true, 00:07:44.862 "zcopy": true, 00:07:44.862 "get_zone_info": false, 00:07:44.862 "zone_management": false, 00:07:44.862 "zone_append": false, 00:07:44.862 "compare": false, 00:07:44.862 "compare_and_write": false, 00:07:44.862 "abort": true, 00:07:44.862 "seek_hole": false, 00:07:44.862 "seek_data": false, 00:07:44.862 "copy": true, 00:07:44.862 "nvme_iov_md": false 00:07:44.862 }, 00:07:44.862 "memory_domains": [ 00:07:44.862 { 00:07:44.862 "dma_device_id": "system", 00:07:44.862 "dma_device_type": 1 00:07:44.862 }, 00:07:44.862 { 00:07:44.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.862 "dma_device_type": 2 00:07:44.862 } 00:07:44.862 ], 00:07:44.862 "driver_specific": {} 00:07:44.862 }, 00:07:44.862 { 00:07:44.862 "name": "Passthru0", 00:07:44.862 "aliases": [ 00:07:44.862 "7bcdca1f-50e6-506d-bd5a-8b074223d55f" 00:07:44.862 ], 00:07:44.862 "product_name": "passthru", 00:07:44.862 "block_size": 512, 00:07:44.862 "num_blocks": 16384, 00:07:44.862 "uuid": "7bcdca1f-50e6-506d-bd5a-8b074223d55f", 00:07:44.862 "assigned_rate_limits": { 00:07:44.862 "rw_ios_per_sec": 0, 00:07:44.862 "rw_mbytes_per_sec": 0, 00:07:44.862 "r_mbytes_per_sec": 0, 00:07:44.862 "w_mbytes_per_sec": 0 00:07:44.862 }, 00:07:44.862 "claimed": false, 00:07:44.862 "zoned": false, 00:07:44.862 "supported_io_types": { 00:07:44.862 "read": true, 00:07:44.862 "write": true, 00:07:44.862 "unmap": true, 00:07:44.862 "flush": true, 00:07:44.862 "reset": true, 00:07:44.862 "nvme_admin": false, 00:07:44.862 "nvme_io": false, 00:07:44.862 "nvme_io_md": false, 00:07:44.862 "write_zeroes": true, 00:07:44.862 "zcopy": true, 00:07:44.862 "get_zone_info": false, 00:07:44.862 "zone_management": false, 00:07:44.862 "zone_append": false, 00:07:44.862 "compare": false, 00:07:44.862 "compare_and_write": false, 00:07:44.862 "abort": true, 00:07:44.862 "seek_hole": false, 00:07:44.862 "seek_data": false, 00:07:44.862 "copy": true, 00:07:44.862 "nvme_iov_md": false 00:07:44.862 }, 00:07:44.862 "memory_domains": [ 00:07:44.862 { 00:07:44.862 "dma_device_id": "system", 00:07:44.862 "dma_device_type": 1 00:07:44.862 }, 00:07:44.862 { 00:07:44.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.862 "dma_device_type": 2 00:07:44.862 } 00:07:44.862 ], 00:07:44.862 "driver_specific": { 00:07:44.862 "passthru": { 00:07:44.862 "name": "Passthru0", 00:07:44.862 "base_bdev_name": "Malloc0" 00:07:44.862 } 00:07:44.862 } 00:07:44.862 } 00:07:44.862 ]' 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:44.862 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:45.123 06:18:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:45.123 00:07:45.123 real 0m0.290s 00:07:45.123 user 0m0.178s 00:07:45.123 sys 0m0.045s 00:07:45.123 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.123 06:18:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.123 ************************************ 00:07:45.123 END TEST rpc_integrity 00:07:45.123 ************************************ 00:07:45.123 06:18:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:45.123 06:18:05 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.123 06:18:05 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.123 06:18:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.123 ************************************ 00:07:45.123 START TEST rpc_plugins 00:07:45.123 ************************************ 00:07:45.123 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:07:45.123 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:45.123 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.123 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:45.123 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.123 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:45.123 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:45.123 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.123 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:45.123 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.123 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:45.123 { 00:07:45.123 "name": "Malloc1", 00:07:45.123 "aliases": [ 00:07:45.123 "64e2a1d8-0491-4bd4-8bbc-714f893afae1" 00:07:45.123 ], 00:07:45.123 "product_name": "Malloc disk", 00:07:45.123 "block_size": 4096, 00:07:45.123 "num_blocks": 256, 00:07:45.123 "uuid": "64e2a1d8-0491-4bd4-8bbc-714f893afae1", 00:07:45.123 "assigned_rate_limits": { 00:07:45.123 "rw_ios_per_sec": 0, 00:07:45.123 "rw_mbytes_per_sec": 0, 00:07:45.123 "r_mbytes_per_sec": 0, 00:07:45.123 "w_mbytes_per_sec": 0 00:07:45.123 }, 00:07:45.123 "claimed": false, 00:07:45.123 "zoned": false, 00:07:45.123 "supported_io_types": { 00:07:45.123 "read": true, 00:07:45.123 "write": true, 00:07:45.123 "unmap": true, 00:07:45.123 "flush": true, 00:07:45.123 "reset": true, 00:07:45.124 "nvme_admin": false, 00:07:45.124 "nvme_io": false, 00:07:45.124 "nvme_io_md": false, 00:07:45.124 "write_zeroes": true, 00:07:45.124 "zcopy": true, 00:07:45.124 "get_zone_info": false, 00:07:45.124 "zone_management": false, 00:07:45.124 "zone_append": false, 00:07:45.124 "compare": false, 00:07:45.124 "compare_and_write": false, 00:07:45.124 "abort": true, 00:07:45.124 "seek_hole": false, 00:07:45.124 "seek_data": false, 00:07:45.124 "copy": true, 00:07:45.124 "nvme_iov_md": false 
00:07:45.124 }, 00:07:45.124 "memory_domains": [ 00:07:45.124 { 00:07:45.124 "dma_device_id": "system", 00:07:45.124 "dma_device_type": 1 00:07:45.124 }, 00:07:45.124 { 00:07:45.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.124 "dma_device_type": 2 00:07:45.124 } 00:07:45.124 ], 00:07:45.124 "driver_specific": {} 00:07:45.124 } 00:07:45.124 ]' 00:07:45.124 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:45.124 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:45.124 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:45.124 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.124 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.124 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:45.124 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.124 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.124 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:45.124 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:45.384 06:18:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:45.384 00:07:45.384 real 0m0.149s 00:07:45.384 user 0m0.094s 00:07:45.384 sys 0m0.021s 00:07:45.384 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.384 06:18:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:45.384 ************************************ 00:07:45.384 END TEST rpc_plugins 00:07:45.384 ************************************ 00:07:45.384 06:18:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:45.384 06:18:05 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.384 06:18:05 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.384 06:18:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.384 ************************************ 00:07:45.384 START TEST rpc_trace_cmd_test 00:07:45.384 ************************************ 00:07:45.384 06:18:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:07:45.384 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:45.384 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:45.384 06:18:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.384 06:18:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.384 06:18:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.384 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:45.384 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2595171", 00:07:45.384 "tpoint_group_mask": "0x8", 00:07:45.384 "iscsi_conn": { 00:07:45.384 "mask": "0x2", 00:07:45.384 "tpoint_mask": "0x0" 00:07:45.384 }, 00:07:45.384 "scsi": { 00:07:45.384 "mask": "0x4", 00:07:45.384 "tpoint_mask": "0x0" 00:07:45.384 }, 00:07:45.384 "bdev": { 00:07:45.384 "mask": "0x8", 00:07:45.384 "tpoint_mask": "0xffffffffffffffff" 00:07:45.384 }, 00:07:45.384 "nvmf_rdma": { 00:07:45.384 "mask": "0x10", 00:07:45.384 "tpoint_mask": "0x0" 00:07:45.384 }, 00:07:45.384 "nvmf_tcp": { 00:07:45.384 "mask": "0x20", 00:07:45.384 
"tpoint_mask": "0x0" 00:07:45.384 }, 00:07:45.384 "ftl": { 00:07:45.384 "mask": "0x40", 00:07:45.384 "tpoint_mask": "0x0" 00:07:45.384 }, 00:07:45.384 "blobfs": { 00:07:45.384 "mask": "0x80", 00:07:45.384 "tpoint_mask": "0x0" 00:07:45.384 }, 00:07:45.385 "dsa": { 00:07:45.385 "mask": "0x200", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "thread": { 00:07:45.385 "mask": "0x400", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "nvme_pcie": { 00:07:45.385 "mask": "0x800", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "iaa": { 00:07:45.385 "mask": "0x1000", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "nvme_tcp": { 00:07:45.385 "mask": "0x2000", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "bdev_nvme": { 00:07:45.385 "mask": "0x4000", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "sock": { 00:07:45.385 "mask": "0x8000", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "blob": { 00:07:45.385 "mask": "0x10000", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "bdev_raid": { 00:07:45.385 "mask": "0x20000", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 }, 00:07:45.385 "scheduler": { 00:07:45.385 "mask": "0x40000", 00:07:45.385 "tpoint_mask": "0x0" 00:07:45.385 } 00:07:45.385 }' 00:07:45.385 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:45.385 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:45.385 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:45.385 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:45.385 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:45.385 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:45.385 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:45.645 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:45.645 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:45.645 06:18:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:45.645 00:07:45.645 real 0m0.251s 00:07:45.645 user 0m0.206s 00:07:45.645 sys 0m0.037s 00:07:45.645 06:18:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.645 06:18:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.645 ************************************ 00:07:45.645 END TEST rpc_trace_cmd_test 00:07:45.645 ************************************ 00:07:45.645 06:18:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:45.645 06:18:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:45.645 06:18:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:45.645 06:18:05 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.645 06:18:05 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.645 06:18:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.645 ************************************ 00:07:45.645 START TEST rpc_daemon_integrity 00:07:45.645 ************************************ 00:07:45.645 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:45.645 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:45.645 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.645 06:18:05 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.645 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:45.646 { 00:07:45.646 "name": "Malloc2", 00:07:45.646 "aliases": [ 00:07:45.646 "21483a25-2758-46ff-8546-2f37a3d259cf" 00:07:45.646 ], 00:07:45.646 "product_name": "Malloc disk", 00:07:45.646 "block_size": 512, 00:07:45.646 "num_blocks": 16384, 00:07:45.646 "uuid": "21483a25-2758-46ff-8546-2f37a3d259cf", 00:07:45.646 "assigned_rate_limits": { 00:07:45.646 "rw_ios_per_sec": 0, 00:07:45.646 "rw_mbytes_per_sec": 0, 00:07:45.646 "r_mbytes_per_sec": 0, 00:07:45.646 "w_mbytes_per_sec": 0 00:07:45.646 }, 00:07:45.646 "claimed": false, 00:07:45.646 "zoned": false, 00:07:45.646 "supported_io_types": { 00:07:45.646 "read": true, 00:07:45.646 "write": true, 00:07:45.646 "unmap": true, 00:07:45.646 "flush": true, 00:07:45.646 "reset": true, 00:07:45.646 "nvme_admin": false, 00:07:45.646 "nvme_io": false, 00:07:45.646 "nvme_io_md": false, 00:07:45.646 "write_zeroes": true, 00:07:45.646 "zcopy": true, 00:07:45.646 "get_zone_info": false, 00:07:45.646 "zone_management": false, 00:07:45.646 "zone_append": false, 00:07:45.646 "compare": false, 00:07:45.646 "compare_and_write": false, 00:07:45.646 "abort": true, 00:07:45.646 "seek_hole": false, 00:07:45.646 "seek_data": false, 00:07:45.646 "copy": true, 00:07:45.646 "nvme_iov_md": false 00:07:45.646 }, 00:07:45.646 "memory_domains": [ 00:07:45.646 { 00:07:45.646 "dma_device_id": "system", 00:07:45.646 "dma_device_type": 1 00:07:45.646 }, 00:07:45.646 { 00:07:45.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.646 "dma_device_type": 2 00:07:45.646 } 00:07:45.646 ], 00:07:45.646 "driver_specific": {} 00:07:45.646 } 00:07:45.646 ]' 00:07:45.646 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.907 [2024-11-20 06:18:05.954969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:45.907 
[2024-11-20 06:18:05.955012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.907 [2024-11-20 06:18:05.955030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd9c920 00:07:45.907 [2024-11-20 06:18:05.955039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.907 [2024-11-20 06:18:05.956569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.907 [2024-11-20 06:18:05.956606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:45.907 Passthru0 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:45.907 { 00:07:45.907 "name": "Malloc2", 00:07:45.907 "aliases": [ 00:07:45.907 "21483a25-2758-46ff-8546-2f37a3d259cf" 00:07:45.907 ], 00:07:45.907 "product_name": "Malloc disk", 00:07:45.907 "block_size": 512, 00:07:45.907 "num_blocks": 16384, 00:07:45.907 "uuid": "21483a25-2758-46ff-8546-2f37a3d259cf", 00:07:45.907 "assigned_rate_limits": { 00:07:45.907 "rw_ios_per_sec": 0, 00:07:45.907 "rw_mbytes_per_sec": 0, 00:07:45.907 "r_mbytes_per_sec": 0, 00:07:45.907 "w_mbytes_per_sec": 0 00:07:45.907 }, 00:07:45.907 "claimed": true, 00:07:45.907 "claim_type": "exclusive_write", 00:07:45.907 "zoned": false, 00:07:45.907 "supported_io_types": { 00:07:45.907 "read": true, 00:07:45.907 "write": true, 00:07:45.907 "unmap": true, 00:07:45.907 "flush": true, 00:07:45.907 "reset": true, 00:07:45.907 "nvme_admin": false, 00:07:45.907 "nvme_io": false, 00:07:45.907 "nvme_io_md": false, 00:07:45.907 "write_zeroes": true, 00:07:45.907 "zcopy": true, 00:07:45.907 "get_zone_info": false, 00:07:45.907 "zone_management": false, 00:07:45.907 "zone_append": false, 00:07:45.907 "compare": false, 00:07:45.907 "compare_and_write": false, 00:07:45.907 "abort": true, 00:07:45.907 "seek_hole": false, 00:07:45.907 "seek_data": false, 00:07:45.907 "copy": true, 00:07:45.907 "nvme_iov_md": false 00:07:45.907 }, 00:07:45.907 "memory_domains": [ 00:07:45.907 { 00:07:45.907 "dma_device_id": "system", 00:07:45.907 "dma_device_type": 1 00:07:45.907 }, 00:07:45.907 { 00:07:45.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.907 "dma_device_type": 2 00:07:45.907 } 00:07:45.907 ], 00:07:45.907 "driver_specific": {} 00:07:45.907 }, 00:07:45.907 { 00:07:45.907 "name": "Passthru0", 00:07:45.907 "aliases": [ 00:07:45.907 "9b331641-6626-5a5b-ab70-f285668f8dcf" 00:07:45.907 ], 00:07:45.907 "product_name": "passthru", 00:07:45.907 "block_size": 512, 00:07:45.907 "num_blocks": 16384, 00:07:45.907 "uuid": "9b331641-6626-5a5b-ab70-f285668f8dcf", 00:07:45.907 "assigned_rate_limits": { 00:07:45.907 "rw_ios_per_sec": 0, 00:07:45.907 "rw_mbytes_per_sec": 0, 00:07:45.907 "r_mbytes_per_sec": 0, 00:07:45.907 "w_mbytes_per_sec": 0 00:07:45.907 }, 00:07:45.907 "claimed": false, 00:07:45.907 "zoned": false, 00:07:45.907 "supported_io_types": { 00:07:45.907 "read": true, 00:07:45.907 "write": true, 00:07:45.907 "unmap": true, 00:07:45.907 "flush": true, 00:07:45.907 "reset": true, 
00:07:45.907 "nvme_admin": false, 00:07:45.907 "nvme_io": false, 00:07:45.907 "nvme_io_md": false, 00:07:45.907 "write_zeroes": true, 00:07:45.907 "zcopy": true, 00:07:45.907 "get_zone_info": false, 00:07:45.907 "zone_management": false, 00:07:45.907 "zone_append": false, 00:07:45.907 "compare": false, 00:07:45.907 "compare_and_write": false, 00:07:45.907 "abort": true, 00:07:45.907 "seek_hole": false, 00:07:45.907 "seek_data": false, 00:07:45.907 "copy": true, 00:07:45.907 "nvme_iov_md": false 00:07:45.907 }, 00:07:45.907 "memory_domains": [ 00:07:45.907 { 00:07:45.907 "dma_device_id": "system", 00:07:45.907 "dma_device_type": 1 00:07:45.907 }, 00:07:45.907 { 00:07:45.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.907 "dma_device_type": 2 00:07:45.907 } 00:07:45.907 ], 00:07:45.907 "driver_specific": { 00:07:45.907 "passthru": { 00:07:45.907 "name": "Passthru0", 00:07:45.907 "base_bdev_name": "Malloc2" 00:07:45.907 } 00:07:45.907 } 00:07:45.907 } 00:07:45.907 ]' 00:07:45.907 06:18:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:45.907 06:18:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:45.907 00:07:45.908 real 0m0.304s 00:07:45.908 user 0m0.189s 00:07:45.908 sys 0m0.051s 00:07:45.908 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.908 06:18:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:45.908 ************************************ 00:07:45.908 END TEST rpc_daemon_integrity 00:07:45.908 ************************************ 00:07:45.908 06:18:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:45.908 06:18:06 rpc -- rpc/rpc.sh@84 -- # killprocess 2595171 00:07:45.908 06:18:06 rpc -- common/autotest_common.sh@952 -- # '[' -z 2595171 ']' 00:07:45.908 06:18:06 rpc -- common/autotest_common.sh@956 -- # kill -0 2595171 00:07:45.908 06:18:06 rpc -- common/autotest_common.sh@957 -- # uname 00:07:45.908 06:18:06 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:45.908 06:18:06 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2595171 
00:07:46.168 06:18:06 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.168 06:18:06 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.168 06:18:06 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2595171' 00:07:46.168 killing process with pid 2595171 00:07:46.168 06:18:06 rpc -- common/autotest_common.sh@971 -- # kill 2595171 00:07:46.168 06:18:06 rpc -- common/autotest_common.sh@976 -- # wait 2595171 00:07:46.428 00:07:46.428 real 0m2.691s 00:07:46.428 user 0m3.430s 00:07:46.428 sys 0m0.832s 00:07:46.428 06:18:06 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.428 06:18:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.428 ************************************ 00:07:46.428 END TEST rpc 00:07:46.428 ************************************ 00:07:46.428 06:18:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:46.428 06:18:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:46.428 06:18:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.428 06:18:06 -- common/autotest_common.sh@10 -- # set +x 00:07:46.428 ************************************ 00:07:46.428 START TEST skip_rpc 00:07:46.428 ************************************ 00:07:46.428 06:18:06 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:46.428 * Looking for test storage... 00:07:46.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:46.428 06:18:06 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:46.428 06:18:06 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:46.428 06:18:06 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.689 06:18:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:46.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.689 --rc genhtml_branch_coverage=1 00:07:46.689 --rc genhtml_function_coverage=1 00:07:46.689 --rc genhtml_legend=1 00:07:46.689 --rc geninfo_all_blocks=1 00:07:46.689 --rc geninfo_unexecuted_blocks=1 00:07:46.689 00:07:46.689 ' 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:46.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.689 --rc genhtml_branch_coverage=1 00:07:46.689 --rc genhtml_function_coverage=1 00:07:46.689 --rc genhtml_legend=1 00:07:46.689 --rc geninfo_all_blocks=1 00:07:46.689 --rc geninfo_unexecuted_blocks=1 00:07:46.689 00:07:46.689 ' 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:46.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.689 --rc genhtml_branch_coverage=1 00:07:46.689 --rc genhtml_function_coverage=1 00:07:46.689 --rc genhtml_legend=1 00:07:46.689 --rc geninfo_all_blocks=1 00:07:46.689 --rc geninfo_unexecuted_blocks=1 00:07:46.689 00:07:46.689 ' 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:46.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.689 --rc genhtml_branch_coverage=1 00:07:46.689 --rc genhtml_function_coverage=1 00:07:46.689 --rc genhtml_legend=1 00:07:46.689 --rc geninfo_all_blocks=1 00:07:46.689 --rc geninfo_unexecuted_blocks=1 00:07:46.689 00:07:46.689 ' 00:07:46.689 06:18:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:46.689 06:18:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:46.689 06:18:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.689 06:18:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 ************************************ 00:07:46.689 START TEST skip_rpc 00:07:46.689 ************************************ 00:07:46.689 06:18:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:07:46.689 
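
test_skip_rpc, invoked here, launches the target with --no-rpc-server, so /var/tmp/spdk.sock never appears and the NOT wrapper asserts that spdk_get_version fails. A condensed sketch of the same check (flags and method name are taken from the log below; the sleep stands in for the harness's startup wait):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC unexpectedly succeeded without an RPC server" >&2
    fi
    kill "$tgt_pid"
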
06:18:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2596018 00:07:46.689 06:18:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.689 06:18:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:46.690 06:18:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:46.690 [2024-11-20 06:18:06.850060] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:07:46.690 [2024-11-20 06:18:06.850123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596018 ] 00:07:46.690 [2024-11-20 06:18:06.940388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.950 [2024-11-20 06:18:06.993301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2596018 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2596018 ']' 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2596018 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2596018 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2596018' 00:07:52.239 killing process with pid 2596018 00:07:52.239 06:18:11 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2596018 00:07:52.239 06:18:11 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2596018 00:07:52.239 00:07:52.239 real 0m5.267s 00:07:52.239 user 0m5.025s 00:07:52.239 sys 0m0.286s 00:07:52.239 06:18:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.239 06:18:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.239 ************************************ 00:07:52.239 END TEST skip_rpc 00:07:52.239 ************************************ 00:07:52.239 06:18:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:52.239 06:18:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:52.239 06:18:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.239 06:18:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.239 ************************************ 00:07:52.239 START TEST skip_rpc_with_json 00:07:52.239 ************************************ 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2597521 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2597521 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2597521 ']' 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.239 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:52.239 [2024-11-20 06:18:12.194582] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
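The skip_rpc run that just completed is the harness's baseline trick: spdk_tgt is started with --no-rpc-server, so rpc_cmd spdk_get_version is expected to fail, and the NOT wrapper inverts the exit status so that the RPC error counts as a pass. Reduced to its essentials as a sketch (this NOT is paraphrased from autotest_common.sh, not quoted from it):

  # NOT: succeed only when the wrapped command fails
  NOT() { if "$@"; then return 1; else return 0; fi; }

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                              # the test sleeps instead of polling for a socket
  NOT scripts/rpc.py spdk_get_version  # must fail: no RPC server is listening
  kill "$!"                            # tear the target down again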
00:07:52.239 [2024-11-20 06:18:12.194634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597521 ] 00:07:52.239 [2024-11-20 06:18:12.279280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.239 [2024-11-20 06:18:12.310095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.810 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.810 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:52.811 [2024-11-20 06:18:12.982789] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:52.811 request: 00:07:52.811 { 00:07:52.811 "trtype": "tcp", 00:07:52.811 "method": "nvmf_get_transports", 00:07:52.811 "req_id": 1 00:07:52.811 } 00:07:52.811 Got JSON-RPC error response 00:07:52.811 response: 00:07:52.811 { 00:07:52.811 "code": -19, 00:07:52.811 "message": "No such device" 00:07:52.811 } 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:52.811 [2024-11-20 06:18:12.994890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.811 06:18:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.072 06:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:53.072 { 00:07:53.072 "subsystems": [ 00:07:53.072 { 00:07:53.072 "subsystem": "fsdev", 00:07:53.072 "config": [ 00:07:53.072 { 00:07:53.072 "method": "fsdev_set_opts", 00:07:53.072 "params": { 00:07:53.072 "fsdev_io_pool_size": 65535, 00:07:53.072 "fsdev_io_cache_size": 256 00:07:53.072 } 00:07:53.072 } 00:07:53.072 ] 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "subsystem": "vfio_user_target", 00:07:53.072 "config": null 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "subsystem": "keyring", 00:07:53.072 "config": [] 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "subsystem": "iobuf", 00:07:53.072 "config": [ 00:07:53.072 { 00:07:53.072 "method": "iobuf_set_options", 00:07:53.072 "params": { 00:07:53.072 "small_pool_count": 8192, 00:07:53.072 "large_pool_count": 1024, 00:07:53.072 "small_bufsize": 8192, 00:07:53.072 "large_bufsize": 135168, 00:07:53.072 "enable_numa": false 00:07:53.072 } 00:07:53.072 } 
00:07:53.072 ] 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "subsystem": "sock", 00:07:53.072 "config": [ 00:07:53.072 { 00:07:53.072 "method": "sock_set_default_impl", 00:07:53.072 "params": { 00:07:53.072 "impl_name": "posix" 00:07:53.072 } 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "method": "sock_impl_set_options", 00:07:53.072 "params": { 00:07:53.072 "impl_name": "ssl", 00:07:53.072 "recv_buf_size": 4096, 00:07:53.072 "send_buf_size": 4096, 00:07:53.072 "enable_recv_pipe": true, 00:07:53.072 "enable_quickack": false, 00:07:53.072 "enable_placement_id": 0, 00:07:53.072 "enable_zerocopy_send_server": true, 00:07:53.072 "enable_zerocopy_send_client": false, 00:07:53.072 "zerocopy_threshold": 0, 00:07:53.072 "tls_version": 0, 00:07:53.072 "enable_ktls": false 00:07:53.072 } 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "method": "sock_impl_set_options", 00:07:53.072 "params": { 00:07:53.072 "impl_name": "posix", 00:07:53.072 "recv_buf_size": 2097152, 00:07:53.072 "send_buf_size": 2097152, 00:07:53.072 "enable_recv_pipe": true, 00:07:53.072 "enable_quickack": false, 00:07:53.072 "enable_placement_id": 0, 00:07:53.072 "enable_zerocopy_send_server": true, 00:07:53.072 "enable_zerocopy_send_client": false, 00:07:53.072 "zerocopy_threshold": 0, 00:07:53.072 "tls_version": 0, 00:07:53.072 "enable_ktls": false 00:07:53.072 } 00:07:53.072 } 00:07:53.072 ] 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "subsystem": "vmd", 00:07:53.072 "config": [] 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "subsystem": "accel", 00:07:53.072 "config": [ 00:07:53.072 { 00:07:53.072 "method": "accel_set_options", 00:07:53.072 "params": { 00:07:53.072 "small_cache_size": 128, 00:07:53.072 "large_cache_size": 16, 00:07:53.072 "task_count": 2048, 00:07:53.072 "sequence_count": 2048, 00:07:53.072 "buf_count": 2048 00:07:53.072 } 00:07:53.072 } 00:07:53.072 ] 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "subsystem": "bdev", 00:07:53.072 "config": [ 00:07:53.072 { 00:07:53.072 "method": "bdev_set_options", 00:07:53.072 "params": { 00:07:53.072 "bdev_io_pool_size": 65535, 00:07:53.072 "bdev_io_cache_size": 256, 00:07:53.072 "bdev_auto_examine": true, 00:07:53.072 "iobuf_small_cache_size": 128, 00:07:53.072 "iobuf_large_cache_size": 16 00:07:53.072 } 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "method": "bdev_raid_set_options", 00:07:53.072 "params": { 00:07:53.072 "process_window_size_kb": 1024, 00:07:53.072 "process_max_bandwidth_mb_sec": 0 00:07:53.072 } 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "method": "bdev_iscsi_set_options", 00:07:53.072 "params": { 00:07:53.072 "timeout_sec": 30 00:07:53.072 } 00:07:53.072 }, 00:07:53.072 { 00:07:53.072 "method": "bdev_nvme_set_options", 00:07:53.072 "params": { 00:07:53.072 "action_on_timeout": "none", 00:07:53.072 "timeout_us": 0, 00:07:53.072 "timeout_admin_us": 0, 00:07:53.072 "keep_alive_timeout_ms": 10000, 00:07:53.072 "arbitration_burst": 0, 00:07:53.072 "low_priority_weight": 0, 00:07:53.072 "medium_priority_weight": 0, 00:07:53.072 "high_priority_weight": 0, 00:07:53.072 "nvme_adminq_poll_period_us": 10000, 00:07:53.072 "nvme_ioq_poll_period_us": 0, 00:07:53.072 "io_queue_requests": 0, 00:07:53.072 "delay_cmd_submit": true, 00:07:53.072 "transport_retry_count": 4, 00:07:53.072 "bdev_retry_count": 3, 00:07:53.072 "transport_ack_timeout": 0, 00:07:53.072 "ctrlr_loss_timeout_sec": 0, 00:07:53.072 "reconnect_delay_sec": 0, 00:07:53.072 "fast_io_fail_timeout_sec": 0, 00:07:53.072 "disable_auto_failback": false, 00:07:53.072 "generate_uuids": false, 00:07:53.072 "transport_tos": 
0, 00:07:53.072 "nvme_error_stat": false, 00:07:53.072 "rdma_srq_size": 0, 00:07:53.073 "io_path_stat": false, 00:07:53.073 "allow_accel_sequence": false, 00:07:53.073 "rdma_max_cq_size": 0, 00:07:53.073 "rdma_cm_event_timeout_ms": 0, 00:07:53.073 "dhchap_digests": [ 00:07:53.073 "sha256", 00:07:53.073 "sha384", 00:07:53.073 "sha512" 00:07:53.073 ], 00:07:53.073 "dhchap_dhgroups": [ 00:07:53.073 "null", 00:07:53.073 "ffdhe2048", 00:07:53.073 "ffdhe3072", 00:07:53.073 "ffdhe4096", 00:07:53.073 "ffdhe6144", 00:07:53.073 "ffdhe8192" 00:07:53.073 ] 00:07:53.073 } 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "method": "bdev_nvme_set_hotplug", 00:07:53.073 "params": { 00:07:53.073 "period_us": 100000, 00:07:53.073 "enable": false 00:07:53.073 } 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "method": "bdev_wait_for_examine" 00:07:53.073 } 00:07:53.073 ] 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "scsi", 00:07:53.073 "config": null 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "scheduler", 00:07:53.073 "config": [ 00:07:53.073 { 00:07:53.073 "method": "framework_set_scheduler", 00:07:53.073 "params": { 00:07:53.073 "name": "static" 00:07:53.073 } 00:07:53.073 } 00:07:53.073 ] 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "vhost_scsi", 00:07:53.073 "config": [] 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "vhost_blk", 00:07:53.073 "config": [] 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "ublk", 00:07:53.073 "config": [] 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "nbd", 00:07:53.073 "config": [] 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "nvmf", 00:07:53.073 "config": [ 00:07:53.073 { 00:07:53.073 "method": "nvmf_set_config", 00:07:53.073 "params": { 00:07:53.073 "discovery_filter": "match_any", 00:07:53.073 "admin_cmd_passthru": { 00:07:53.073 "identify_ctrlr": false 00:07:53.073 }, 00:07:53.073 "dhchap_digests": [ 00:07:53.073 "sha256", 00:07:53.073 "sha384", 00:07:53.073 "sha512" 00:07:53.073 ], 00:07:53.073 "dhchap_dhgroups": [ 00:07:53.073 "null", 00:07:53.073 "ffdhe2048", 00:07:53.073 "ffdhe3072", 00:07:53.073 "ffdhe4096", 00:07:53.073 "ffdhe6144", 00:07:53.073 "ffdhe8192" 00:07:53.073 ] 00:07:53.073 } 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "method": "nvmf_set_max_subsystems", 00:07:53.073 "params": { 00:07:53.073 "max_subsystems": 1024 00:07:53.073 } 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "method": "nvmf_set_crdt", 00:07:53.073 "params": { 00:07:53.073 "crdt1": 0, 00:07:53.073 "crdt2": 0, 00:07:53.073 "crdt3": 0 00:07:53.073 } 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "method": "nvmf_create_transport", 00:07:53.073 "params": { 00:07:53.073 "trtype": "TCP", 00:07:53.073 "max_queue_depth": 128, 00:07:53.073 "max_io_qpairs_per_ctrlr": 127, 00:07:53.073 "in_capsule_data_size": 4096, 00:07:53.073 "max_io_size": 131072, 00:07:53.073 "io_unit_size": 131072, 00:07:53.073 "max_aq_depth": 128, 00:07:53.073 "num_shared_buffers": 511, 00:07:53.073 "buf_cache_size": 4294967295, 00:07:53.073 "dif_insert_or_strip": false, 00:07:53.073 "zcopy": false, 00:07:53.073 "c2h_success": true, 00:07:53.073 "sock_priority": 0, 00:07:53.073 "abort_timeout_sec": 1, 00:07:53.073 "ack_timeout": 0, 00:07:53.073 "data_wr_pool_size": 0 00:07:53.073 } 00:07:53.073 } 00:07:53.073 ] 00:07:53.073 }, 00:07:53.073 { 00:07:53.073 "subsystem": "iscsi", 00:07:53.073 "config": [ 00:07:53.073 { 00:07:53.073 "method": "iscsi_set_options", 00:07:53.073 "params": { 00:07:53.073 "node_base": "iqn.2016-06.io.spdk", 00:07:53.073 "max_sessions": 
128, 00:07:53.073 "max_connections_per_session": 2, 00:07:53.073 "max_queue_depth": 64, 00:07:53.073 "default_time2wait": 2, 00:07:53.073 "default_time2retain": 20, 00:07:53.073 "first_burst_length": 8192, 00:07:53.073 "immediate_data": true, 00:07:53.073 "allow_duplicated_isid": false, 00:07:53.073 "error_recovery_level": 0, 00:07:53.073 "nop_timeout": 60, 00:07:53.073 "nop_in_interval": 30, 00:07:53.073 "disable_chap": false, 00:07:53.073 "require_chap": false, 00:07:53.073 "mutual_chap": false, 00:07:53.073 "chap_group": 0, 00:07:53.073 "max_large_datain_per_connection": 64, 00:07:53.073 "max_r2t_per_connection": 4, 00:07:53.073 "pdu_pool_size": 36864, 00:07:53.073 "immediate_data_pool_size": 16384, 00:07:53.073 "data_out_pool_size": 2048 00:07:53.073 } 00:07:53.073 } 00:07:53.073 ] 00:07:53.073 } 00:07:53.073 ] 00:07:53.073 } 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2597521 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2597521 ']' 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2597521 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2597521 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2597521' 00:07:53.073 killing process with pid 2597521 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2597521 00:07:53.073 06:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2597521 00:07:53.334 06:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2597861 00:07:53.334 06:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:53.334 06:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2597861 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2597861 ']' 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2597861 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2597861 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 2597861' 00:07:58.625 killing process with pid 2597861 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2597861 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2597861 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:58.625 00:07:58.625 real 0m6.545s 00:07:58.625 user 0m6.451s 00:07:58.625 sys 0m0.562s 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:58.625 ************************************ 00:07:58.625 END TEST skip_rpc_with_json 00:07:58.625 ************************************ 00:07:58.625 06:18:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:58.625 06:18:18 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.625 06:18:18 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.625 06:18:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.625 ************************************ 00:07:58.625 START TEST skip_rpc_with_delay 00:07:58.625 ************************************ 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:58.625 
[2024-11-20 06:18:18.825106] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.625 00:07:58.625 real 0m0.078s 00:07:58.625 user 0m0.050s 00:07:58.625 sys 0m0.027s 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.625 06:18:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:58.625 ************************************ 00:07:58.625 END TEST skip_rpc_with_delay 00:07:58.625 ************************************ 00:07:58.625 06:18:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:58.625 06:18:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:58.625 06:18:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:58.625 06:18:18 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.625 06:18:18 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.625 06:18:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.884 ************************************ 00:07:58.884 START TEST exit_on_failed_rpc_init 00:07:58.884 ************************************ 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2598920 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2598920 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2598920 ']' 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.884 06:18:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.885 06:18:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.885 06:18:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:58.885 [2024-11-20 06:18:18.987615] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:07:58.885 [2024-11-20 06:18:18.987676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598920 ] 00:07:58.885 [2024-11-20 06:18:19.074796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.885 [2024-11-20 06:18:19.109451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:59.824 06:18:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:59.824 [2024-11-20 06:18:19.843101] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:07:59.824 [2024-11-20 06:18:19.843154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599111 ] 00:07:59.824 [2024-11-20 06:18:19.930526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.824 [2024-11-20 06:18:19.966371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.824 [2024-11-20 06:18:19.966419] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
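The listen failure above, together with the spdk_rpc_initialize error that follows, is exactly what exit_on_failed_rpc_init is after: a second spdk_tgt is launched while the first one still owns the default RPC socket, so rpc_listen fails and the app exits non-zero. A minimal sketch of the collision and the usual way around it, assuming a build/bin layout (the core masks match the log; the alternate socket path is illustrative):

  ./build/bin/spdk_tgt -m 0x1 &                  # first target binds /var/tmp/spdk.sock
  sleep 1
  ./build/bin/spdk_tgt -m 0x2 \
    || echo "second target failed, as expected"  # same socket: fails as logged here
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &   # -r picks a different RPC socket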
00:07:59.824 [2024-11-20 06:18:19.966430] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:59.824 [2024-11-20 06:18:19.966436] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2598920 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2598920 ']' 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2598920 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2598920 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2598920' 00:07:59.824 killing process with pid 2598920 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2598920 00:07:59.824 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2598920 00:08:00.085 00:08:00.085 real 0m1.331s 00:08:00.085 user 0m1.569s 00:08:00.085 sys 0m0.384s 00:08:00.085 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.085 06:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:00.085 ************************************ 00:08:00.085 END TEST exit_on_failed_rpc_init 00:08:00.085 ************************************ 00:08:00.085 06:18:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:00.085 00:08:00.085 real 0m13.754s 00:08:00.085 user 0m13.324s 00:08:00.085 sys 0m1.593s 00:08:00.085 06:18:20 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.085 06:18:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.085 ************************************ 00:08:00.085 END TEST skip_rpc 00:08:00.085 ************************************ 00:08:00.085 06:18:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:00.085 06:18:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.085 06:18:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.085 06:18:20 -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.345 ************************************ 00:08:00.345 START TEST rpc_client 00:08:00.345 ************************************ 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:00.345 * Looking for test storage... 00:08:00.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.345 06:18:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.345 --rc genhtml_branch_coverage=1 00:08:00.345 --rc genhtml_function_coverage=1 00:08:00.345 --rc genhtml_legend=1 00:08:00.345 --rc geninfo_all_blocks=1 00:08:00.345 --rc geninfo_unexecuted_blocks=1 00:08:00.345 00:08:00.345 ' 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.345 --rc genhtml_branch_coverage=1 00:08:00.345 --rc genhtml_function_coverage=1 00:08:00.345 --rc genhtml_legend=1 00:08:00.345 --rc geninfo_all_blocks=1 00:08:00.345 --rc geninfo_unexecuted_blocks=1 00:08:00.345 00:08:00.345 ' 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.345 --rc genhtml_branch_coverage=1 00:08:00.345 --rc genhtml_function_coverage=1 00:08:00.345 --rc genhtml_legend=1 00:08:00.345 --rc geninfo_all_blocks=1 00:08:00.345 --rc geninfo_unexecuted_blocks=1 00:08:00.345 00:08:00.345 ' 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.345 --rc genhtml_branch_coverage=1 00:08:00.345 --rc genhtml_function_coverage=1 00:08:00.345 --rc genhtml_legend=1 00:08:00.345 --rc geninfo_all_blocks=1 00:08:00.345 --rc geninfo_unexecuted_blocks=1 00:08:00.345 00:08:00.345 ' 00:08:00.345 06:18:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:00.345 OK 00:08:00.345 06:18:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:00.345 00:08:00.345 real 0m0.225s 00:08:00.345 user 0m0.135s 00:08:00.345 sys 0m0.105s 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.345 06:18:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:00.345 ************************************ 00:08:00.345 END TEST rpc_client 00:08:00.345 ************************************ 00:08:00.607 06:18:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
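The version-comparison xtrace above (repeated once more for json_config below) is scripts/common.sh checking whether the installed lcov 1.15 predates 2.x before exporting the extra branch/function coverage options; it splits both version strings on dots and compares them field by field, with missing fields treated as zero. The same logic as a compact sketch (version_lt is an invented name for this example; the script itself routes the check through cmp_versions, and numeric fields are assumed):

  # return 0 when $1 sorts strictly before $2
  version_lt() {
    local -a a b
    local i
    IFS='.-' read -r -a a <<< "$1"
    IFS='.-' read -r -a b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "old lcov: keep the explicit coverage flags"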
00:08:00.607 06:18:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.607 06:18:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.607 06:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:00.607 ************************************ 00:08:00.607 START TEST json_config 00:08:00.607 ************************************ 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.607 06:18:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.607 06:18:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.607 06:18:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.607 06:18:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.607 06:18:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.607 06:18:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.607 06:18:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.607 06:18:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:00.607 06:18:20 json_config -- scripts/common.sh@345 -- # : 1 00:08:00.607 06:18:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.607 06:18:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.607 06:18:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:00.607 06:18:20 json_config -- scripts/common.sh@353 -- # local d=1 00:08:00.607 06:18:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.607 06:18:20 json_config -- scripts/common.sh@355 -- # echo 1 00:08:00.607 06:18:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.607 06:18:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@353 -- # local d=2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.607 06:18:20 json_config -- scripts/common.sh@355 -- # echo 2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.607 06:18:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.607 06:18:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.607 06:18:20 json_config -- scripts/common.sh@368 -- # return 0 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.607 --rc genhtml_branch_coverage=1 00:08:00.607 --rc genhtml_function_coverage=1 00:08:00.607 --rc genhtml_legend=1 00:08:00.607 --rc geninfo_all_blocks=1 00:08:00.607 --rc geninfo_unexecuted_blocks=1 00:08:00.607 00:08:00.607 ' 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.607 --rc genhtml_branch_coverage=1 00:08:00.607 --rc genhtml_function_coverage=1 00:08:00.607 --rc genhtml_legend=1 00:08:00.607 --rc geninfo_all_blocks=1 00:08:00.607 --rc geninfo_unexecuted_blocks=1 00:08:00.607 00:08:00.607 ' 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.607 --rc genhtml_branch_coverage=1 00:08:00.607 --rc genhtml_function_coverage=1 00:08:00.607 --rc genhtml_legend=1 00:08:00.607 --rc geninfo_all_blocks=1 00:08:00.607 --rc geninfo_unexecuted_blocks=1 00:08:00.607 00:08:00.607 ' 00:08:00.607 06:18:20 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.607 --rc genhtml_branch_coverage=1 00:08:00.607 --rc genhtml_function_coverage=1 00:08:00.607 --rc genhtml_legend=1 00:08:00.607 --rc geninfo_all_blocks=1 00:08:00.607 --rc geninfo_unexecuted_blocks=1 00:08:00.607 00:08:00.607 ' 00:08:00.607 06:18:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:00.607 06:18:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.607 06:18:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.607 06:18:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.607 06:18:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.607 06:18:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.607 06:18:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 06:18:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 06:18:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 06:18:20 json_config -- paths/export.sh@5 -- # export PATH 00:08:00.607 06:18:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@51 -- # : 0 00:08:00.607 06:18:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.608 06:18:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:08:00.608 06:18:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.608 06:18:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.608 06:18:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.608 06:18:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.608 06:18:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.608 06:18:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.608 06:18:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:00.869 INFO: JSON configuration test init 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.869 06:18:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:00.869 06:18:20 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:00.869 06:18:20 json_config -- json_config/common.sh@10 -- # shift 00:08:00.869 06:18:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:00.869 06:18:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:00.869 06:18:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:00.869 06:18:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:00.869 06:18:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:00.869 06:18:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2599398 00:08:00.869 06:18:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:00.869 Waiting for target to run... 00:08:00.869 06:18:20 json_config -- json_config/common.sh@25 -- # waitforlisten 2599398 /var/tmp/spdk_tgt.sock 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@833 -- # '[' -z 2599398 ']' 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.869 06:18:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:00.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.869 06:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.869 [2024-11-20 06:18:20.960360] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:00.869 [2024-11-20 06:18:20.960410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599398 ] 00:08:01.130 [2024-11-20 06:18:21.269831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.130 [2024-11-20 06:18:21.300145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.700 06:18:21 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.700 06:18:21 json_config -- common/autotest_common.sh@866 -- # return 0 00:08:01.700 06:18:21 json_config -- json_config/common.sh@26 -- # echo '' 00:08:01.700 00:08:01.700 06:18:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:01.700 06:18:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:01.700 06:18:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:01.700 06:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.700 06:18:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:01.700 06:18:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:01.700 06:18:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.700 06:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.700 06:18:21 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:01.700 06:18:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:01.700 06:18:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:02.272 06:18:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.272 06:18:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:02.272 06:18:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:02.272 06:18:22 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@54 -- # sort 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:02.272 06:18:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:02.272 06:18:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.272 06:18:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:02.533 06:18:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.533 06:18:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:02.533 06:18:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:02.533 MallocForNvmf0 00:08:02.533 06:18:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:02.533 06:18:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:02.792 MallocForNvmf1 00:08:02.792 06:18:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:02.792 06:18:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:03.052 [2024-11-20 06:18:23.109259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.052 06:18:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.052 06:18:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.052 06:18:23 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:03.052 06:18:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:03.312 06:18:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:03.312 06:18:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:03.572 06:18:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:03.572 06:18:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:03.572 [2024-11-20 06:18:23.811394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:03.572 06:18:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:03.572 06:18:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.572 06:18:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:03.833 06:18:23 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:03.833 06:18:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.833 06:18:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:03.833 06:18:23 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:03.833 06:18:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:03.833 06:18:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:03.833 MallocBdevForConfigChangeCheck 00:08:03.833 06:18:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:03.833 06:18:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.833 06:18:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.093 06:18:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:04.093 06:18:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:04.353 06:18:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:04.353 INFO: shutting down applications... 
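
A note on the stretch above: create_nvmf_subsystem_config drove the entire NVMe-oF target setup through rpc.py. Condensed into a plain script, with the long rpc.py invocation shortened to an $RPC variable for readability, the same sequence looks like this. Every command and argument is taken verbatim from the trace; the malloc size comments assume rpc.py's usual size-in-MB/block-size argument order.

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport (flags verbatim from the trace)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
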
00:08:04.353 06:18:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:04.353 06:18:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:04.353 06:18:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:04.353 06:18:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:04.613 Calling clear_iscsi_subsystem 00:08:04.613 Calling clear_nvmf_subsystem 00:08:04.613 Calling clear_nbd_subsystem 00:08:04.613 Calling clear_ublk_subsystem 00:08:04.613 Calling clear_vhost_blk_subsystem 00:08:04.613 Calling clear_vhost_scsi_subsystem 00:08:04.613 Calling clear_bdev_subsystem 00:08:04.613 06:18:24 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:04.613 06:18:24 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:04.613 06:18:24 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:04.613 06:18:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:04.613 06:18:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:04.613 06:18:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:05.184 06:18:25 json_config -- json_config/json_config.sh@352 -- # break 00:08:05.184 06:18:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:05.184 06:18:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:05.184 06:18:25 json_config -- json_config/common.sh@31 -- # local app=target 00:08:05.184 06:18:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:05.184 06:18:25 json_config -- json_config/common.sh@35 -- # [[ -n 2599398 ]] 00:08:05.184 06:18:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2599398 00:08:05.184 06:18:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:05.184 06:18:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:05.184 06:18:25 json_config -- json_config/common.sh@41 -- # kill -0 2599398 00:08:05.184 06:18:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:05.755 06:18:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:05.755 06:18:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:05.755 06:18:25 json_config -- json_config/common.sh@41 -- # kill -0 2599398 00:08:05.755 06:18:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:05.755 06:18:25 json_config -- json_config/common.sh@43 -- # break 00:08:05.755 06:18:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:05.755 06:18:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:05.755 SPDK target shutdown done 00:08:05.755 06:18:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:05.755 INFO: relaunching applications... 
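
The shutdown that just completed follows a fixed pattern in json_config/common.sh, fully visible in the trace above: send SIGINT to the target, then poll up to 30 times with kill -0 (which tests process existence without delivering a signal), sleeping half a second between probes. A minimal sketch of that loop using the values from the trace; the real helper also clears app_pid and handles the timed-out case:

    kill -SIGINT "$pid"                        # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # process gone? stop waiting
        sleep 0.5
    done
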
00:08:05.755 06:18:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:05.755 06:18:25 json_config -- json_config/common.sh@9 -- # local app=target 00:08:05.755 06:18:25 json_config -- json_config/common.sh@10 -- # shift 00:08:05.755 06:18:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:05.755 06:18:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:05.755 06:18:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:05.755 06:18:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:05.755 06:18:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:05.755 06:18:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2600535 00:08:05.755 06:18:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:05.755 Waiting for target to run... 00:08:05.755 06:18:25 json_config -- json_config/common.sh@25 -- # waitforlisten 2600535 /var/tmp/spdk_tgt.sock 00:08:05.755 06:18:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:05.755 06:18:25 json_config -- common/autotest_common.sh@833 -- # '[' -z 2600535 ']' 00:08:05.755 06:18:25 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:05.755 06:18:25 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:05.755 06:18:25 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:05.755 06:18:25 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:05.755 06:18:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.755 [2024-11-20 06:18:25.850782] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:05.755 [2024-11-20 06:18:25.850839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600535 ] 00:08:06.016 [2024-11-20 06:18:26.148960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.016 [2024-11-20 06:18:26.174097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.587 [2024-11-20 06:18:26.676112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.587 [2024-11-20 06:18:26.708527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:06.587 06:18:26 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.587 06:18:26 json_config -- common/autotest_common.sh@866 -- # return 0 00:08:06.587 06:18:26 json_config -- json_config/common.sh@26 -- # echo '' 00:08:06.587 00:08:06.587 06:18:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:06.587 06:18:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:06.587 INFO: Checking if target configuration is the same... 
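
The relaunch above is the heart of the json_config test: the configuration captured earlier with save_config is handed to a fresh spdk_tgt via --json, which replays it at startup. That is why the TCP transport init and the 127.0.0.1:4420 listener notices reappear without any new RPC calls. In sketch form, where the spdk_tgt command line is verbatim from the trace and the redirection of save_config output into spdk_tgt_config.json is implied by the script rather than shown:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json
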
00:08:06.587 06:18:26 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:06.587 06:18:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:06.587 06:18:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:06.587 + '[' 2 -ne 2 ']' 00:08:06.587 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:06.587 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:06.587 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:06.587 +++ basename /dev/fd/62 00:08:06.587 ++ mktemp /tmp/62.XXX 00:08:06.587 + tmp_file_1=/tmp/62.fvw 00:08:06.587 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:06.587 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:06.587 + tmp_file_2=/tmp/spdk_tgt_config.json.Ss0 00:08:06.587 + ret=0 00:08:06.587 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:06.847 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:07.107 + diff -u /tmp/62.fvw /tmp/spdk_tgt_config.json.Ss0 00:08:07.107 + echo 'INFO: JSON config files are the same' 00:08:07.107 INFO: JSON config files are the same 00:08:07.107 + rm /tmp/62.fvw /tmp/spdk_tgt_config.json.Ss0 00:08:07.107 + exit 0 00:08:07.107 06:18:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:07.107 06:18:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:07.107 INFO: changing configuration and checking if this can be detected... 00:08:07.107 06:18:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:07.107 06:18:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:07.107 06:18:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:07.107 06:18:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:07.107 06:18:27 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:07.107 + '[' 2 -ne 2 ']' 00:08:07.107 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:07.107 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
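
Both json_diff.sh runs in this stretch (the one that just reported 'INFO: JSON config files are the same' and the one being set up in the surrounding lines, after MallocBdevForConfigChangeCheck was deleted) compare configurations the same way: normalize each side with config_filter.py, then diff the normalized temp files, so key ordering never registers as a change. A sketch of that comparison, assuming config_filter.py -method sort filters stdin to stdout, which is what the temp-file plumbing in the trace implies:

    sort_json() { test/json_config/config_filter.py -method sort; }
    sort_json < "$live_config"  > /tmp/a.json     # e.g. the /dev/fd/62 stream
    sort_json < "$saved_config" > /tmp/b.json
    diff -u /tmp/a.json /tmp/b.json && echo 'INFO: JSON config files are the same'
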
00:08:07.107 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:07.107 +++ basename /dev/fd/62 00:08:07.107 ++ mktemp /tmp/62.XXX 00:08:07.107 + tmp_file_1=/tmp/62.goS 00:08:07.107 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:07.107 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:07.107 + tmp_file_2=/tmp/spdk_tgt_config.json.Egf 00:08:07.107 + ret=0 00:08:07.107 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:07.679 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:07.679 + diff -u /tmp/62.goS /tmp/spdk_tgt_config.json.Egf 00:08:07.679 + ret=1 00:08:07.679 + echo '=== Start of file: /tmp/62.goS ===' 00:08:07.679 + cat /tmp/62.goS 00:08:07.679 + echo '=== End of file: /tmp/62.goS ===' 00:08:07.679 + echo '' 00:08:07.679 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Egf ===' 00:08:07.679 + cat /tmp/spdk_tgt_config.json.Egf 00:08:07.679 + echo '=== End of file: /tmp/spdk_tgt_config.json.Egf ===' 00:08:07.679 + echo '' 00:08:07.679 + rm /tmp/62.goS /tmp/spdk_tgt_config.json.Egf 00:08:07.679 + exit 1 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:07.679 INFO: configuration change detected. 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 2600535 ]] 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.679 06:18:27 json_config -- json_config/json_config.sh@330 -- # killprocess 2600535 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@952 -- # '[' -z 2600535 ']' 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@956 -- # kill -0 2600535 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@957 -- # uname 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:07.679 06:18:27 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2600535 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2600535' 00:08:07.679 killing process with pid 2600535 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@971 -- # kill 2600535 00:08:07.679 06:18:27 json_config -- common/autotest_common.sh@976 -- # wait 2600535 00:08:07.941 06:18:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:07.941 06:18:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:07.941 06:18:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.941 06:18:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.941 06:18:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:07.941 06:18:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:07.941 INFO: Success 00:08:07.941 00:08:07.941 real 0m7.463s 00:08:07.941 user 0m9.114s 00:08:07.941 sys 0m1.967s 00:08:07.941 06:18:28 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.941 06:18:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.941 ************************************ 00:08:07.941 END TEST json_config 00:08:07.941 ************************************ 00:08:07.941 06:18:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:07.941 06:18:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.941 06:18:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.941 06:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.203 ************************************ 00:08:08.203 START TEST json_config_extra_key 00:08:08.203 ************************************ 00:08:08.203 06:18:28 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:08.203 06:18:28 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.204 06:18:28 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.204 --rc genhtml_branch_coverage=1 00:08:08.204 --rc genhtml_function_coverage=1 00:08:08.204 --rc genhtml_legend=1 00:08:08.204 --rc geninfo_all_blocks=1 00:08:08.204 --rc geninfo_unexecuted_blocks=1 00:08:08.204 00:08:08.204 ' 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.204 --rc genhtml_branch_coverage=1 00:08:08.204 --rc genhtml_function_coverage=1 00:08:08.204 --rc genhtml_legend=1 00:08:08.204 --rc geninfo_all_blocks=1 00:08:08.204 --rc geninfo_unexecuted_blocks=1 00:08:08.204 00:08:08.204 ' 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.204 --rc genhtml_branch_coverage=1 00:08:08.204 --rc genhtml_function_coverage=1 00:08:08.204 --rc genhtml_legend=1 00:08:08.204 --rc geninfo_all_blocks=1 00:08:08.204 --rc geninfo_unexecuted_blocks=1 00:08:08.204 00:08:08.204 ' 00:08:08.204 06:18:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.204 --rc genhtml_branch_coverage=1 00:08:08.204 --rc genhtml_function_coverage=1 00:08:08.204 --rc genhtml_legend=1 00:08:08.204 --rc geninfo_all_blocks=1 00:08:08.204 --rc geninfo_unexecuted_blocks=1 00:08:08.204 00:08:08.204 ' 00:08:08.204 06:18:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.204 06:18:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.204 06:18:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.204 06:18:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.204 06:18:28 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.204 06:18:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:08.204 06:18:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.204 06:18:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:08.204 INFO: launching applications... 
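
Two details in the block above are worth unpacking. First, the 'line 33: [: : integer expression expected' message is bash complaining about '[' '' -eq 1 ']': -eq needs integer operands and the left-hand variable expanded empty; nvmf/common.sh tolerates the failure, but the usual guard is a default expansion. Second, json_config/common.sh tracks every app instance in associative arrays keyed by app name, exactly as declared in the trace. Sketches of both follow; the flag and do_thing names in the first line are illustrative, not the real identifiers:

    [ "${flag:-0}" -eq 1 ] && do_thing          # default '' to 0 before -eq
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
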
00:08:08.204 06:18:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:08.204 06:18:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:08.204 06:18:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:08.204 06:18:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:08.204 06:18:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:08.205 06:18:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:08.205 06:18:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:08.205 06:18:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:08.205 06:18:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2601310 00:08:08.205 06:18:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:08.205 Waiting for target to run... 00:08:08.205 06:18:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2601310 /var/tmp/spdk_tgt.sock 00:08:08.205 06:18:28 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2601310 ']' 00:08:08.205 06:18:28 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:08.205 06:18:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:08.205 06:18:28 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.205 06:18:28 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:08.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:08.205 06:18:28 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.205 06:18:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 [2024-11-20 06:18:28.489241] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:08.467 [2024-11-20 06:18:28.489318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601310 ] 00:08:08.727 [2024-11-20 06:18:28.814068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.727 [2024-11-20 06:18:28.837721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.297 06:18:29 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.297 06:18:29 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:09.297 00:08:09.297 06:18:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:09.297 INFO: shutting down applications... 
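
waitforlisten above is what keeps the test from racing the target: it blocks until the freshly launched app is actually serving RPCs. The trace only shows its entry checks ('[' -z <pid> ']', rpc_addr, max_retries=100) and its successful exit ((( i == 0 )), return 0), so the following is a plausible reconstruction of the pattern, not the actual helper from autotest_common.sh:

    waitforlisten() {                            # waitforlisten <pid> [rpc_addr]
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1               # app died early
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
                &>/dev/null && return 0                          # socket answers
            sleep 0.1
        done
        return 1
    }
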
00:08:09.297 06:18:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2601310 ]] 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2601310 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2601310 00:08:09.297 06:18:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:09.558 06:18:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:09.558 06:18:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.558 06:18:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2601310 00:08:09.558 06:18:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:09.558 06:18:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:09.558 06:18:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:09.558 06:18:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:09.558 SPDK target shutdown done 00:08:09.558 06:18:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:09.558 Success 00:08:09.558 00:08:09.558 real 0m1.575s 00:08:09.558 user 0m1.160s 00:08:09.558 sys 0m0.451s 00:08:09.558 06:18:29 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.558 06:18:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:09.558 ************************************ 00:08:09.558 END TEST json_config_extra_key 00:08:09.558 ************************************ 00:08:09.820 06:18:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:09.820 06:18:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:09.820 06:18:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.820 06:18:29 -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 ************************************ 00:08:09.820 START TEST alias_rpc 00:08:09.820 ************************************ 00:08:09.820 06:18:29 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:09.820 * Looking for test storage... 
00:08:09.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:09.820 06:18:29 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.820 06:18:29 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.820 06:18:29 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.820 06:18:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.820 --rc genhtml_branch_coverage=1 00:08:09.820 --rc genhtml_function_coverage=1 00:08:09.820 --rc genhtml_legend=1 00:08:09.820 --rc geninfo_all_blocks=1 00:08:09.820 --rc geninfo_unexecuted_blocks=1 00:08:09.820 00:08:09.820 ' 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.820 --rc genhtml_branch_coverage=1 00:08:09.820 --rc genhtml_function_coverage=1 00:08:09.820 --rc genhtml_legend=1 00:08:09.820 --rc geninfo_all_blocks=1 00:08:09.820 --rc geninfo_unexecuted_blocks=1 00:08:09.820 00:08:09.820 ' 00:08:09.820 06:18:30 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.820 --rc genhtml_branch_coverage=1 00:08:09.820 --rc genhtml_function_coverage=1 00:08:09.820 --rc genhtml_legend=1 00:08:09.820 --rc geninfo_all_blocks=1 00:08:09.820 --rc geninfo_unexecuted_blocks=1 00:08:09.820 00:08:09.820 ' 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.820 --rc genhtml_branch_coverage=1 00:08:09.820 --rc genhtml_function_coverage=1 00:08:09.820 --rc genhtml_legend=1 00:08:09.820 --rc geninfo_all_blocks=1 00:08:09.820 --rc geninfo_unexecuted_blocks=1 00:08:09.820 00:08:09.820 ' 00:08:09.820 06:18:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:09.820 06:18:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2601673 00:08:09.820 06:18:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2601673 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2601673 ']' 00:08:09.820 06:18:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.820 06:18:30 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.821 06:18:30 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.821 06:18:30 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.821 06:18:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.083 [2024-11-20 06:18:30.148072] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:10.083 [2024-11-20 06:18:30.148156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601673 ] 00:08:10.083 [2024-11-20 06:18:30.236760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.083 [2024-11-20 06:18:30.277389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.026 06:18:30 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:11.026 06:18:30 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:11.026 06:18:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:11.026 06:18:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2601673 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2601673 ']' 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2601673 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2601673 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2601673' 00:08:11.026 killing process with pid 2601673 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@971 -- # kill 2601673 00:08:11.026 06:18:31 alias_rpc -- common/autotest_common.sh@976 -- # wait 2601673 00:08:11.287 00:08:11.287 real 0m1.520s 00:08:11.287 user 0m1.647s 00:08:11.287 sys 0m0.460s 00:08:11.287 06:18:31 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.287 06:18:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.287 ************************************ 00:08:11.287 END TEST alias_rpc 00:08:11.287 ************************************ 00:08:11.287 06:18:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:11.287 06:18:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:11.287 06:18:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:11.287 06:18:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.287 06:18:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.287 ************************************ 00:08:11.287 START TEST spdkcli_tcp 00:08:11.287 ************************************ 00:08:11.287 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:11.548 * Looking for test storage... 
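
The lcov probe that opens spdkcli_tcp here is the same version gate every test above ran (json_config_extra_key and alias_rpc included): take the last field of lcov --version and test it against the 1.15/2 bounds with scripts/common.sh's comparison helpers. The trace shows the mechanism: split each version string on the characters . - : and compare field by field as integers, dispatching on the operator via cmp_versions. A skeleton of that logic, simplified to numeric fields only; the real script also validates each field through decimal():

    lt() {                      # lt 1.15 2  ->  returns 0 iff $1 < $2
        local -a ver1 ver2; local v n
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                # equal counts as not-less-than
    }
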
00:08:11.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:11.548 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:11.548 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:11.548 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:11.548 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.548 06:18:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:11.548 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.548 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:11.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.548 --rc genhtml_branch_coverage=1 00:08:11.548 --rc genhtml_function_coverage=1 00:08:11.548 --rc genhtml_legend=1 00:08:11.548 --rc geninfo_all_blocks=1 00:08:11.548 --rc geninfo_unexecuted_blocks=1 00:08:11.548 00:08:11.548 ' 00:08:11.548 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:11.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.549 --rc genhtml_branch_coverage=1 00:08:11.549 --rc genhtml_function_coverage=1 00:08:11.549 --rc genhtml_legend=1 00:08:11.549 --rc geninfo_all_blocks=1 00:08:11.549 --rc 
geninfo_unexecuted_blocks=1 00:08:11.549 00:08:11.549 ' 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:11.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.549 --rc genhtml_branch_coverage=1 00:08:11.549 --rc genhtml_function_coverage=1 00:08:11.549 --rc genhtml_legend=1 00:08:11.549 --rc geninfo_all_blocks=1 00:08:11.549 --rc geninfo_unexecuted_blocks=1 00:08:11.549 00:08:11.549 ' 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:11.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.549 --rc genhtml_branch_coverage=1 00:08:11.549 --rc genhtml_function_coverage=1 00:08:11.549 --rc genhtml_legend=1 00:08:11.549 --rc geninfo_all_blocks=1 00:08:11.549 --rc geninfo_unexecuted_blocks=1 00:08:11.549 00:08:11.549 ' 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2602009 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2602009 00:08:11.549 06:18:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2602009 ']' 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:11.549 06:18:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.549 [2024-11-20 06:18:31.751409] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:11.549 [2024-11-20 06:18:31.751481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602009 ] 00:08:11.809 [2024-11-20 06:18:31.838915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:11.809 [2024-11-20 06:18:31.875321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.809 [2024-11-20 06:18:31.875336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.379 06:18:32 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.379 06:18:32 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:08:12.379 06:18:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:12.379 06:18:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2602145 00:08:12.379 06:18:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:12.639 [ 00:08:12.639 "bdev_malloc_delete", 00:08:12.639 "bdev_malloc_create", 00:08:12.639 "bdev_null_resize", 00:08:12.639 "bdev_null_delete", 00:08:12.639 "bdev_null_create", 00:08:12.639 "bdev_nvme_cuse_unregister", 00:08:12.639 "bdev_nvme_cuse_register", 00:08:12.639 "bdev_opal_new_user", 00:08:12.639 "bdev_opal_set_lock_state", 00:08:12.639 "bdev_opal_delete", 00:08:12.639 "bdev_opal_get_info", 00:08:12.639 "bdev_opal_create", 00:08:12.639 "bdev_nvme_opal_revert", 00:08:12.639 "bdev_nvme_opal_init", 00:08:12.639 "bdev_nvme_send_cmd", 00:08:12.639 "bdev_nvme_set_keys", 00:08:12.639 "bdev_nvme_get_path_iostat", 00:08:12.639 "bdev_nvme_get_mdns_discovery_info", 00:08:12.639 "bdev_nvme_stop_mdns_discovery", 00:08:12.639 "bdev_nvme_start_mdns_discovery", 00:08:12.639 "bdev_nvme_set_multipath_policy", 00:08:12.639 "bdev_nvme_set_preferred_path", 00:08:12.639 "bdev_nvme_get_io_paths", 00:08:12.639 "bdev_nvme_remove_error_injection", 00:08:12.639 "bdev_nvme_add_error_injection", 00:08:12.639 "bdev_nvme_get_discovery_info", 00:08:12.639 "bdev_nvme_stop_discovery", 00:08:12.639 "bdev_nvme_start_discovery", 00:08:12.639 "bdev_nvme_get_controller_health_info", 00:08:12.639 "bdev_nvme_disable_controller", 00:08:12.640 "bdev_nvme_enable_controller", 00:08:12.640 "bdev_nvme_reset_controller", 00:08:12.640 "bdev_nvme_get_transport_statistics", 00:08:12.640 "bdev_nvme_apply_firmware", 00:08:12.640 "bdev_nvme_detach_controller", 00:08:12.640 "bdev_nvme_get_controllers", 00:08:12.640 "bdev_nvme_attach_controller", 00:08:12.640 "bdev_nvme_set_hotplug", 00:08:12.640 "bdev_nvme_set_options", 00:08:12.640 "bdev_passthru_delete", 00:08:12.640 "bdev_passthru_create", 00:08:12.640 "bdev_lvol_set_parent_bdev", 00:08:12.640 "bdev_lvol_set_parent", 00:08:12.640 "bdev_lvol_check_shallow_copy", 00:08:12.640 "bdev_lvol_start_shallow_copy", 00:08:12.640 "bdev_lvol_grow_lvstore", 00:08:12.640 "bdev_lvol_get_lvols", 00:08:12.640 "bdev_lvol_get_lvstores", 00:08:12.640 "bdev_lvol_delete", 00:08:12.640 "bdev_lvol_set_read_only", 00:08:12.640 "bdev_lvol_resize", 00:08:12.640 "bdev_lvol_decouple_parent", 00:08:12.640 "bdev_lvol_inflate", 00:08:12.640 "bdev_lvol_rename", 00:08:12.640 "bdev_lvol_clone_bdev", 00:08:12.640 "bdev_lvol_clone", 00:08:12.640 "bdev_lvol_snapshot", 00:08:12.640 "bdev_lvol_create", 00:08:12.640 "bdev_lvol_delete_lvstore", 00:08:12.640 "bdev_lvol_rename_lvstore", 
00:08:12.640 "bdev_lvol_create_lvstore", 00:08:12.640 "bdev_raid_set_options", 00:08:12.640 "bdev_raid_remove_base_bdev", 00:08:12.640 "bdev_raid_add_base_bdev", 00:08:12.640 "bdev_raid_delete", 00:08:12.640 "bdev_raid_create", 00:08:12.640 "bdev_raid_get_bdevs", 00:08:12.640 "bdev_error_inject_error", 00:08:12.640 "bdev_error_delete", 00:08:12.640 "bdev_error_create", 00:08:12.640 "bdev_split_delete", 00:08:12.640 "bdev_split_create", 00:08:12.640 "bdev_delay_delete", 00:08:12.640 "bdev_delay_create", 00:08:12.640 "bdev_delay_update_latency", 00:08:12.640 "bdev_zone_block_delete", 00:08:12.640 "bdev_zone_block_create", 00:08:12.640 "blobfs_create", 00:08:12.640 "blobfs_detect", 00:08:12.640 "blobfs_set_cache_size", 00:08:12.640 "bdev_aio_delete", 00:08:12.640 "bdev_aio_rescan", 00:08:12.640 "bdev_aio_create", 00:08:12.640 "bdev_ftl_set_property", 00:08:12.640 "bdev_ftl_get_properties", 00:08:12.640 "bdev_ftl_get_stats", 00:08:12.640 "bdev_ftl_unmap", 00:08:12.640 "bdev_ftl_unload", 00:08:12.640 "bdev_ftl_delete", 00:08:12.640 "bdev_ftl_load", 00:08:12.640 "bdev_ftl_create", 00:08:12.640 "bdev_virtio_attach_controller", 00:08:12.640 "bdev_virtio_scsi_get_devices", 00:08:12.640 "bdev_virtio_detach_controller", 00:08:12.640 "bdev_virtio_blk_set_hotplug", 00:08:12.640 "bdev_iscsi_delete", 00:08:12.640 "bdev_iscsi_create", 00:08:12.640 "bdev_iscsi_set_options", 00:08:12.640 "accel_error_inject_error", 00:08:12.640 "ioat_scan_accel_module", 00:08:12.640 "dsa_scan_accel_module", 00:08:12.640 "iaa_scan_accel_module", 00:08:12.640 "vfu_virtio_create_fs_endpoint", 00:08:12.640 "vfu_virtio_create_scsi_endpoint", 00:08:12.640 "vfu_virtio_scsi_remove_target", 00:08:12.640 "vfu_virtio_scsi_add_target", 00:08:12.640 "vfu_virtio_create_blk_endpoint", 00:08:12.640 "vfu_virtio_delete_endpoint", 00:08:12.640 "keyring_file_remove_key", 00:08:12.640 "keyring_file_add_key", 00:08:12.640 "keyring_linux_set_options", 00:08:12.640 "fsdev_aio_delete", 00:08:12.640 "fsdev_aio_create", 00:08:12.640 "iscsi_get_histogram", 00:08:12.640 "iscsi_enable_histogram", 00:08:12.640 "iscsi_set_options", 00:08:12.640 "iscsi_get_auth_groups", 00:08:12.640 "iscsi_auth_group_remove_secret", 00:08:12.640 "iscsi_auth_group_add_secret", 00:08:12.640 "iscsi_delete_auth_group", 00:08:12.640 "iscsi_create_auth_group", 00:08:12.640 "iscsi_set_discovery_auth", 00:08:12.640 "iscsi_get_options", 00:08:12.640 "iscsi_target_node_request_logout", 00:08:12.640 "iscsi_target_node_set_redirect", 00:08:12.640 "iscsi_target_node_set_auth", 00:08:12.640 "iscsi_target_node_add_lun", 00:08:12.640 "iscsi_get_stats", 00:08:12.640 "iscsi_get_connections", 00:08:12.640 "iscsi_portal_group_set_auth", 00:08:12.640 "iscsi_start_portal_group", 00:08:12.640 "iscsi_delete_portal_group", 00:08:12.640 "iscsi_create_portal_group", 00:08:12.640 "iscsi_get_portal_groups", 00:08:12.640 "iscsi_delete_target_node", 00:08:12.640 "iscsi_target_node_remove_pg_ig_maps", 00:08:12.640 "iscsi_target_node_add_pg_ig_maps", 00:08:12.640 "iscsi_create_target_node", 00:08:12.640 "iscsi_get_target_nodes", 00:08:12.640 "iscsi_delete_initiator_group", 00:08:12.640 "iscsi_initiator_group_remove_initiators", 00:08:12.640 "iscsi_initiator_group_add_initiators", 00:08:12.640 "iscsi_create_initiator_group", 00:08:12.640 "iscsi_get_initiator_groups", 00:08:12.640 "nvmf_set_crdt", 00:08:12.640 "nvmf_set_config", 00:08:12.640 "nvmf_set_max_subsystems", 00:08:12.640 "nvmf_stop_mdns_prr", 00:08:12.640 "nvmf_publish_mdns_prr", 00:08:12.640 "nvmf_subsystem_get_listeners", 00:08:12.640 
"nvmf_subsystem_get_qpairs", 00:08:12.640 "nvmf_subsystem_get_controllers", 00:08:12.640 "nvmf_get_stats", 00:08:12.640 "nvmf_get_transports", 00:08:12.640 "nvmf_create_transport", 00:08:12.640 "nvmf_get_targets", 00:08:12.640 "nvmf_delete_target", 00:08:12.640 "nvmf_create_target", 00:08:12.640 "nvmf_subsystem_allow_any_host", 00:08:12.640 "nvmf_subsystem_set_keys", 00:08:12.640 "nvmf_subsystem_remove_host", 00:08:12.640 "nvmf_subsystem_add_host", 00:08:12.640 "nvmf_ns_remove_host", 00:08:12.640 "nvmf_ns_add_host", 00:08:12.640 "nvmf_subsystem_remove_ns", 00:08:12.640 "nvmf_subsystem_set_ns_ana_group", 00:08:12.640 "nvmf_subsystem_add_ns", 00:08:12.640 "nvmf_subsystem_listener_set_ana_state", 00:08:12.640 "nvmf_discovery_get_referrals", 00:08:12.640 "nvmf_discovery_remove_referral", 00:08:12.640 "nvmf_discovery_add_referral", 00:08:12.640 "nvmf_subsystem_remove_listener", 00:08:12.640 "nvmf_subsystem_add_listener", 00:08:12.640 "nvmf_delete_subsystem", 00:08:12.640 "nvmf_create_subsystem", 00:08:12.640 "nvmf_get_subsystems", 00:08:12.640 "env_dpdk_get_mem_stats", 00:08:12.640 "nbd_get_disks", 00:08:12.640 "nbd_stop_disk", 00:08:12.640 "nbd_start_disk", 00:08:12.640 "ublk_recover_disk", 00:08:12.640 "ublk_get_disks", 00:08:12.640 "ublk_stop_disk", 00:08:12.640 "ublk_start_disk", 00:08:12.640 "ublk_destroy_target", 00:08:12.640 "ublk_create_target", 00:08:12.640 "virtio_blk_create_transport", 00:08:12.640 "virtio_blk_get_transports", 00:08:12.640 "vhost_controller_set_coalescing", 00:08:12.640 "vhost_get_controllers", 00:08:12.640 "vhost_delete_controller", 00:08:12.640 "vhost_create_blk_controller", 00:08:12.640 "vhost_scsi_controller_remove_target", 00:08:12.640 "vhost_scsi_controller_add_target", 00:08:12.640 "vhost_start_scsi_controller", 00:08:12.640 "vhost_create_scsi_controller", 00:08:12.640 "thread_set_cpumask", 00:08:12.640 "scheduler_set_options", 00:08:12.640 "framework_get_governor", 00:08:12.640 "framework_get_scheduler", 00:08:12.640 "framework_set_scheduler", 00:08:12.640 "framework_get_reactors", 00:08:12.640 "thread_get_io_channels", 00:08:12.640 "thread_get_pollers", 00:08:12.640 "thread_get_stats", 00:08:12.640 "framework_monitor_context_switch", 00:08:12.640 "spdk_kill_instance", 00:08:12.640 "log_enable_timestamps", 00:08:12.640 "log_get_flags", 00:08:12.640 "log_clear_flag", 00:08:12.640 "log_set_flag", 00:08:12.640 "log_get_level", 00:08:12.640 "log_set_level", 00:08:12.640 "log_get_print_level", 00:08:12.640 "log_set_print_level", 00:08:12.640 "framework_enable_cpumask_locks", 00:08:12.640 "framework_disable_cpumask_locks", 00:08:12.640 "framework_wait_init", 00:08:12.640 "framework_start_init", 00:08:12.640 "scsi_get_devices", 00:08:12.640 "bdev_get_histogram", 00:08:12.640 "bdev_enable_histogram", 00:08:12.640 "bdev_set_qos_limit", 00:08:12.640 "bdev_set_qd_sampling_period", 00:08:12.640 "bdev_get_bdevs", 00:08:12.640 "bdev_reset_iostat", 00:08:12.640 "bdev_get_iostat", 00:08:12.640 "bdev_examine", 00:08:12.640 "bdev_wait_for_examine", 00:08:12.640 "bdev_set_options", 00:08:12.640 "accel_get_stats", 00:08:12.640 "accel_set_options", 00:08:12.640 "accel_set_driver", 00:08:12.640 "accel_crypto_key_destroy", 00:08:12.640 "accel_crypto_keys_get", 00:08:12.640 "accel_crypto_key_create", 00:08:12.640 "accel_assign_opc", 00:08:12.640 "accel_get_module_info", 00:08:12.640 "accel_get_opc_assignments", 00:08:12.640 "vmd_rescan", 00:08:12.640 "vmd_remove_device", 00:08:12.640 "vmd_enable", 00:08:12.640 "sock_get_default_impl", 00:08:12.640 "sock_set_default_impl", 
00:08:12.640 "sock_impl_set_options", 00:08:12.640 "sock_impl_get_options", 00:08:12.640 "iobuf_get_stats", 00:08:12.640 "iobuf_set_options", 00:08:12.640 "keyring_get_keys", 00:08:12.640 "vfu_tgt_set_base_path", 00:08:12.640 "framework_get_pci_devices", 00:08:12.640 "framework_get_config", 00:08:12.640 "framework_get_subsystems", 00:08:12.640 "fsdev_set_opts", 00:08:12.640 "fsdev_get_opts", 00:08:12.640 "trace_get_info", 00:08:12.640 "trace_get_tpoint_group_mask", 00:08:12.640 "trace_disable_tpoint_group", 00:08:12.640 "trace_enable_tpoint_group", 00:08:12.640 "trace_clear_tpoint_mask", 00:08:12.640 "trace_set_tpoint_mask", 00:08:12.640 "notify_get_notifications", 00:08:12.640 "notify_get_types", 00:08:12.640 "spdk_get_version", 00:08:12.640 "rpc_get_methods" 00:08:12.640 ] 00:08:12.640 06:18:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.640 06:18:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:12.640 06:18:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2602009 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2602009 ']' 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2602009 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2602009 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2602009' 00:08:12.640 killing process with pid 2602009 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2602009 00:08:12.640 06:18:32 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2602009 00:08:12.942 00:08:12.942 real 0m1.537s 00:08:12.942 user 0m2.765s 00:08:12.942 sys 0m0.494s 00:08:12.942 06:18:33 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.942 06:18:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.942 ************************************ 00:08:12.942 END TEST spdkcli_tcp 00:08:12.942 ************************************ 00:08:12.942 06:18:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:12.942 06:18:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:12.942 06:18:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.942 06:18:33 -- common/autotest_common.sh@10 -- # set +x 00:08:12.942 ************************************ 00:08:12.942 START TEST dpdk_mem_utility 00:08:12.942 ************************************ 00:08:12.942 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:12.942 * Looking for test storage... 
00:08:12.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:12.942 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.942 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.942 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.202 06:18:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.202 --rc genhtml_branch_coverage=1 00:08:13.202 --rc genhtml_function_coverage=1 00:08:13.202 --rc genhtml_legend=1 00:08:13.202 --rc geninfo_all_blocks=1 00:08:13.202 --rc geninfo_unexecuted_blocks=1 00:08:13.202 00:08:13.202 ' 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.202 --rc 
genhtml_branch_coverage=1 00:08:13.202 --rc genhtml_function_coverage=1 00:08:13.202 --rc genhtml_legend=1 00:08:13.202 --rc geninfo_all_blocks=1 00:08:13.202 --rc geninfo_unexecuted_blocks=1 00:08:13.202 00:08:13.202 ' 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.202 --rc genhtml_branch_coverage=1 00:08:13.202 --rc genhtml_function_coverage=1 00:08:13.202 --rc genhtml_legend=1 00:08:13.202 --rc geninfo_all_blocks=1 00:08:13.202 --rc geninfo_unexecuted_blocks=1 00:08:13.202 00:08:13.202 ' 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.202 --rc genhtml_branch_coverage=1 00:08:13.202 --rc genhtml_function_coverage=1 00:08:13.202 --rc genhtml_legend=1 00:08:13.202 --rc geninfo_all_blocks=1 00:08:13.202 --rc geninfo_unexecuted_blocks=1 00:08:13.202 00:08:13.202 ' 00:08:13.202 06:18:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:13.202 06:18:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2602375 00:08:13.202 06:18:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2602375 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2602375 ']' 00:08:13.202 06:18:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.202 06:18:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:13.202 [2024-11-20 06:18:33.356204] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
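Once the spdk_tgt above finishes initializing, the test pulls a DPDK memory snapshot out of it: the heap, mempool, and memzone report that follows is produced by the env_dpdk_get_mem_stats RPC plus the dpdk_mem_info.py parser. A sketch of that sequence, assuming a target running on the default RPC socket:

    # Ask the target to write out its DPDK memory state (the RPC returns the
    # dump path, /tmp/spdk_mem_dump.txt, as the trace below shows)
    scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps and mempools from the dump
    scripts/dpdk_mem_info.py
    # Per-element and memzone detail for heap 0 (the "-m 0" report below)
    scripts/dpdk_mem_info.py -m 0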
00:08:13.202 [2024-11-20 06:18:33.356285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602375 ] 00:08:13.202 [2024-11-20 06:18:33.443745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.202 [2024-11-20 06:18:33.478876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.144 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.144 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:08:14.144 06:18:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:14.144 06:18:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:14.144 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.144 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:14.144 { 00:08:14.144 "filename": "/tmp/spdk_mem_dump.txt" 00:08:14.145 } 00:08:14.145 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.145 06:18:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:14.145 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:14.145 1 heaps totaling size 818.000000 MiB 00:08:14.145 size: 818.000000 MiB heap id: 0 00:08:14.145 end heaps---------- 00:08:14.145 9 mempools totaling size 603.782043 MiB 00:08:14.145 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:14.145 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:14.145 size: 100.555481 MiB name: bdev_io_2602375 00:08:14.145 size: 50.003479 MiB name: msgpool_2602375 00:08:14.145 size: 36.509338 MiB name: fsdev_io_2602375 00:08:14.145 size: 21.763794 MiB name: PDU_Pool 00:08:14.145 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:14.145 size: 4.133484 MiB name: evtpool_2602375 00:08:14.145 size: 0.026123 MiB name: Session_Pool 00:08:14.145 end mempools------- 00:08:14.145 6 memzones totaling size 4.142822 MiB 00:08:14.145 size: 1.000366 MiB name: RG_ring_0_2602375 00:08:14.145 size: 1.000366 MiB name: RG_ring_1_2602375 00:08:14.145 size: 1.000366 MiB name: RG_ring_4_2602375 00:08:14.145 size: 1.000366 MiB name: RG_ring_5_2602375 00:08:14.145 size: 0.125366 MiB name: RG_ring_2_2602375 00:08:14.145 size: 0.015991 MiB name: RG_ring_3_2602375 00:08:14.145 end memzones------- 00:08:14.145 06:18:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:14.145 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:14.145 list of free elements. 
size: 10.852478 MiB 00:08:14.145 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:14.145 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:14.145 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:14.145 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:14.145 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:14.145 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:14.145 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:14.145 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:14.145 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:08:14.145 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:14.145 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:14.145 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:14.145 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:14.145 element at address: 0x200028200000 with size: 0.410034 MiB 00:08:14.145 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:14.145 list of standard malloc elements. size: 199.218628 MiB 00:08:14.145 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:14.145 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:14.145 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:14.145 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:14.145 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:14.145 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:14.145 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:14.145 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:14.145 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:14.145 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:08:14.145 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:14.145 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200028268f80 with size: 0.000183 MiB 00:08:14.145 element at address: 0x200028269040 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:14.145 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:14.145 list of memzone associated elements. size: 607.928894 MiB 00:08:14.145 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:14.145 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:14.145 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:14.145 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:14.145 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:14.145 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2602375_0 00:08:14.145 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:14.145 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2602375_0 00:08:14.145 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:14.145 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2602375_0 00:08:14.145 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:14.145 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:14.145 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:14.145 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:14.145 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:14.145 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2602375_0 00:08:14.145 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:14.145 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2602375 00:08:14.145 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:14.145 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2602375 00:08:14.145 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:14.145 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:14.145 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:14.145 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:14.145 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:14.145 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:14.145 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:14.145 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:14.145 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:14.145 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2602375 00:08:14.145 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:14.145 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2602375 00:08:14.145 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:14.145 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2602375 00:08:14.145 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:08:14.145 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2602375 00:08:14.145 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:14.145 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2602375 00:08:14.145 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:14.145 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2602375 00:08:14.145 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:14.145 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:14.145 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:14.146 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:14.146 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:14.146 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:14.146 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:14.146 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2602375 00:08:14.146 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:14.146 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2602375 00:08:14.146 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:14.146 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:14.146 element at address: 0x200028269100 with size: 0.023743 MiB 00:08:14.146 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:14.146 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:14.146 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2602375 00:08:14.146 element at address: 0x20002826f240 with size: 0.002441 MiB 00:08:14.146 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:14.146 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:14.146 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2602375 00:08:14.146 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:14.146 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2602375 00:08:14.146 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:14.146 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2602375 00:08:14.146 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:08:14.146 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:14.146 06:18:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:14.146 06:18:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2602375 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2602375 ']' 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2602375 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2602375 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2602375' 00:08:14.146 killing process with pid 2602375 00:08:14.146 06:18:34 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2602375 00:08:14.146 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2602375 00:08:14.463 00:08:14.463 real 0m1.396s 00:08:14.463 user 0m1.468s 00:08:14.463 sys 0m0.416s 00:08:14.463 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.463 06:18:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:14.463 ************************************ 00:08:14.463 END TEST dpdk_mem_utility 00:08:14.464 ************************************ 00:08:14.464 06:18:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:14.464 06:18:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.464 06:18:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.464 06:18:34 -- common/autotest_common.sh@10 -- # set +x 00:08:14.464 ************************************ 00:08:14.464 START TEST event 00:08:14.464 ************************************ 00:08:14.464 06:18:34 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:14.464 * Looking for test storage... 00:08:14.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:14.464 06:18:34 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:14.464 06:18:34 event -- common/autotest_common.sh@1691 -- # lcov --version 00:08:14.464 06:18:34 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:14.754 06:18:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.754 06:18:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.754 06:18:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.754 06:18:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.754 06:18:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.754 06:18:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.754 06:18:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.754 06:18:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.754 06:18:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.754 06:18:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.754 06:18:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.754 06:18:34 event -- scripts/common.sh@344 -- # case "$op" in 00:08:14.754 06:18:34 event -- scripts/common.sh@345 -- # : 1 00:08:14.754 06:18:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.754 06:18:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.754 06:18:34 event -- scripts/common.sh@365 -- # decimal 1 00:08:14.754 06:18:34 event -- scripts/common.sh@353 -- # local d=1 00:08:14.754 06:18:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.754 06:18:34 event -- scripts/common.sh@355 -- # echo 1 00:08:14.754 06:18:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.754 06:18:34 event -- scripts/common.sh@366 -- # decimal 2 00:08:14.754 06:18:34 event -- scripts/common.sh@353 -- # local d=2 00:08:14.754 06:18:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.754 06:18:34 event -- scripts/common.sh@355 -- # echo 2 00:08:14.754 06:18:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.754 06:18:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.754 06:18:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.754 06:18:34 event -- scripts/common.sh@368 -- # return 0 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:14.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.754 --rc genhtml_branch_coverage=1 00:08:14.754 --rc genhtml_function_coverage=1 00:08:14.754 --rc genhtml_legend=1 00:08:14.754 --rc geninfo_all_blocks=1 00:08:14.754 --rc geninfo_unexecuted_blocks=1 00:08:14.754 00:08:14.754 ' 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:14.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.754 --rc genhtml_branch_coverage=1 00:08:14.754 --rc genhtml_function_coverage=1 00:08:14.754 --rc genhtml_legend=1 00:08:14.754 --rc geninfo_all_blocks=1 00:08:14.754 --rc geninfo_unexecuted_blocks=1 00:08:14.754 00:08:14.754 ' 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:14.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.754 --rc genhtml_branch_coverage=1 00:08:14.754 --rc genhtml_function_coverage=1 00:08:14.754 --rc genhtml_legend=1 00:08:14.754 --rc geninfo_all_blocks=1 00:08:14.754 --rc geninfo_unexecuted_blocks=1 00:08:14.754 00:08:14.754 ' 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:14.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.754 --rc genhtml_branch_coverage=1 00:08:14.754 --rc genhtml_function_coverage=1 00:08:14.754 --rc genhtml_legend=1 00:08:14.754 --rc geninfo_all_blocks=1 00:08:14.754 --rc geninfo_unexecuted_blocks=1 00:08:14.754 00:08:14.754 ' 00:08:14.754 06:18:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:14.754 06:18:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:14.754 06:18:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:14.754 06:18:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.754 06:18:34 event -- common/autotest_common.sh@10 -- # set +x 00:08:14.754 ************************************ 00:08:14.754 START TEST event_perf 00:08:14.754 ************************************ 00:08:14.754 06:18:34 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:08:14.754 Running I/O for 1 seconds...[2024-11-20 06:18:34.827740] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:14.754 [2024-11-20 06:18:34.827845] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602681 ] 00:08:14.754 [2024-11-20 06:18:34.921231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.754 [2024-11-20 06:18:34.959310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.754 [2024-11-20 06:18:34.959462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.754 [2024-11-20 06:18:34.959900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.754 [2024-11-20 06:18:34.959901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.722 Running I/O for 1 seconds... 00:08:15.722 lcore 0: 180946 00:08:15.722 lcore 1: 180949 00:08:15.722 lcore 2: 180948 00:08:15.722 lcore 3: 180944 00:08:15.722 done. 00:08:15.722 00:08:15.722 real 0m1.183s 00:08:15.722 user 0m4.087s 00:08:15.722 sys 0m0.094s 00:08:15.722 06:18:35 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.722 06:18:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:15.722 ************************************ 00:08:15.722 END TEST event_perf 00:08:15.722 ************************************ 00:08:15.982 06:18:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:15.982 06:18:36 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:15.982 06:18:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.982 06:18:36 event -- common/autotest_common.sh@10 -- # set +x 00:08:15.982 ************************************ 00:08:15.982 START TEST event_reactor 00:08:15.982 ************************************ 00:08:15.983 06:18:36 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:15.983 [2024-11-20 06:18:36.084895] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:15.983 [2024-11-20 06:18:36.084999] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602987 ] 00:08:15.983 [2024-11-20 06:18:36.173053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.983 [2024-11-20 06:18:36.211032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.380 test_start 00:08:17.380 oneshot 00:08:17.380 tick 100 00:08:17.380 tick 100 00:08:17.380 tick 250 00:08:17.380 tick 100 00:08:17.380 tick 100 00:08:17.380 tick 100 00:08:17.380 tick 250 00:08:17.380 tick 500 00:08:17.380 tick 100 00:08:17.380 tick 100 00:08:17.380 tick 250 00:08:17.380 tick 100 00:08:17.380 tick 100 00:08:17.380 test_end 00:08:17.380 00:08:17.380 real 0m1.173s 00:08:17.380 user 0m1.092s 00:08:17.380 sys 0m0.076s 00:08:17.380 06:18:37 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.380 06:18:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:17.380 ************************************ 00:08:17.380 END TEST event_reactor 00:08:17.380 ************************************ 00:08:17.380 06:18:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:17.380 06:18:37 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:17.380 06:18:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.380 06:18:37 event -- common/autotest_common.sh@10 -- # set +x 00:08:17.380 ************************************ 00:08:17.380 START TEST event_reactor_perf 00:08:17.380 ************************************ 00:08:17.380 06:18:37 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:17.380 [2024-11-20 06:18:37.333059] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:17.381 [2024-11-20 06:18:37.333172] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603335 ] 00:08:17.381 [2024-11-20 06:18:37.418308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.381 [2024-11-20 06:18:37.450912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.332 test_start 00:08:18.332 test_end 00:08:18.332 Performance: 537401 events per second 00:08:18.332 00:08:18.332 real 0m1.164s 00:08:18.332 user 0m1.089s 00:08:18.332 sys 0m0.072s 00:08:18.332 06:18:38 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.332 06:18:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:18.332 ************************************ 00:08:18.332 END TEST event_reactor_perf 00:08:18.332 ************************************ 00:08:18.332 06:18:38 event -- event/event.sh@49 -- # uname -s 00:08:18.332 06:18:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:18.332 06:18:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:18.332 06:18:38 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:18.332 06:18:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.332 06:18:38 event -- common/autotest_common.sh@10 -- # set +x 00:08:18.332 ************************************ 00:08:18.332 START TEST event_scheduler 00:08:18.332 ************************************ 00:08:18.332 06:18:38 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:18.594 * Looking for test storage... 
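The event_perf, reactor, and reactor_perf runs above are self-contained micro-benchmarks of the event framework, each printing its own summary (per-lcore event counts, an oneshot/tick timeline, and an aggregate events-per-second figure). They can also be run standalone from an SPDK build tree; a sketch using the same flags that appear in the traces:

    test/event/event_perf/event_perf -m 0xF -t 1   # 1 s of events, counted per lcore
    test/event/reactor/reactor -t 1                # oneshot/tick schedule trace
    test/event/reactor_perf/reactor_perf -t 1      # events per second on one core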
00:08:18.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.594 06:18:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.594 --rc genhtml_branch_coverage=1 00:08:18.594 --rc genhtml_function_coverage=1 00:08:18.594 --rc genhtml_legend=1 00:08:18.594 --rc geninfo_all_blocks=1 00:08:18.594 --rc geninfo_unexecuted_blocks=1 00:08:18.594 00:08:18.594 ' 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.594 --rc genhtml_branch_coverage=1 00:08:18.594 --rc genhtml_function_coverage=1 00:08:18.594 --rc genhtml_legend=1 00:08:18.594 --rc geninfo_all_blocks=1 00:08:18.594 --rc geninfo_unexecuted_blocks=1 00:08:18.594 00:08:18.594 ' 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.594 --rc genhtml_branch_coverage=1 00:08:18.594 --rc genhtml_function_coverage=1 00:08:18.594 --rc genhtml_legend=1 00:08:18.594 --rc geninfo_all_blocks=1 00:08:18.594 --rc geninfo_unexecuted_blocks=1 00:08:18.594 00:08:18.594 ' 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.594 --rc genhtml_branch_coverage=1 00:08:18.594 --rc genhtml_function_coverage=1 00:08:18.594 --rc genhtml_legend=1 00:08:18.594 --rc geninfo_all_blocks=1 00:08:18.594 --rc geninfo_unexecuted_blocks=1 00:08:18.594 00:08:18.594 ' 00:08:18.594 06:18:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:18.594 06:18:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2603730 00:08:18.594 06:18:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:18.594 06:18:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2603730 00:08:18.594 06:18:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2603730 ']' 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:18.594 06:18:38 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.595 06:18:38 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:18.595 06:18:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:18.595 [2024-11-20 06:18:38.806552] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:18.595 [2024-11-20 06:18:38.806606] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603730 ] 00:08:18.856 [2024-11-20 06:18:38.895701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.856 [2024-11-20 06:18:38.934365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.856 [2024-11-20 06:18:38.934518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.856 [2024-11-20 06:18:38.934671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.856 [2024-11-20 06:18:38.934672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:08:19.428 06:18:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:19.428 [2024-11-20 06:18:39.612910] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:19.428 [2024-11-20 06:18:39.612929] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:19.428 [2024-11-20 06:18:39.612941] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:19.428 [2024-11-20 06:18:39.612947] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:19.428 [2024-11-20 06:18:39.612953] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.428 06:18:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:19.428 [2024-11-20 06:18:39.677095] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.428 06:18:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.428 06:18:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 ************************************ 00:08:19.690 START TEST scheduler_create_thread 00:08:19.690 ************************************ 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 2 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 3 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 4 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 5 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 6 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 7 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 8 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.690 9 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.690 06:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:20.262 10 00:08:20.262 06:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.262 06:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:20.262 06:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.262 06:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:21.648 06:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.648 06:18:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:21.648 06:18:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:21.648 06:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.648 06:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.221 06:18:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.221 06:18:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:22.221 06:18:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.221 06:18:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.163 06:18:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.163 06:18:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:23.163 06:18:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:23.163 06:18:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.163 06:18:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.734 06:18:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.734 00:08:23.734 real 0m4.223s 00:08:23.734 user 0m0.024s 00:08:23.734 sys 0m0.006s 00:08:23.734 06:18:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:23.734 06:18:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.734 ************************************ 00:08:23.734 END TEST scheduler_create_thread 00:08:23.734 ************************************ 00:08:23.734 06:18:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:23.734 06:18:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2603730 00:08:23.734 06:18:43 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2603730 ']' 00:08:23.734 06:18:43 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2603730 00:08:23.734 06:18:43 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:08:23.734 06:18:43 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:23.734 06:18:43 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2603730 00:08:23.995 06:18:44 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:23.995 06:18:44 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:23.995 06:18:44 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2603730' 00:08:23.995 killing process with pid 2603730 00:08:23.995 06:18:44 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2603730 00:08:23.995 06:18:44 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2603730 00:08:23.995 [2024-11-20 06:18:44.214747] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:08:24.256 00:08:24.256 real 0m5.817s 00:08:24.256 user 0m12.888s 00:08:24.256 sys 0m0.412s 00:08:24.256 06:18:44 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.256 06:18:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:24.256 ************************************ 00:08:24.256 END TEST event_scheduler 00:08:24.256 ************************************ 00:08:24.256 06:18:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:24.256 06:18:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:24.256 06:18:44 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:24.256 06:18:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.256 06:18:44 event -- common/autotest_common.sh@10 -- # set +x 00:08:24.256 ************************************ 00:08:24.256 START TEST app_repeat 00:08:24.256 ************************************ 00:08:24.256 06:18:44 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2604795 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2604795' 00:08:24.256 Process app_repeat pid: 2604795 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:24.256 spdk_app_start Round 0 00:08:24.256 06:18:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2604795 /var/tmp/spdk-nbd.sock 00:08:24.256 06:18:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2604795 ']' 00:08:24.256 06:18:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:24.256 06:18:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.256 06:18:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:24.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:24.256 06:18:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.256 06:18:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:24.256 [2024-11-20 06:18:44.493044] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:24.256 [2024-11-20 06:18:44.493107] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604795 ] 00:08:24.517 [2024-11-20 06:18:44.549298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.517 [2024-11-20 06:18:44.579541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.517 [2024-11-20 06:18:44.579542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.517 06:18:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.517 06:18:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:24.517 06:18:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.779 Malloc0 00:08:24.779 06:18:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.779 Malloc1 00:08:24.779 06:18:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.779 06:18:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:25.040 /dev/nbd0 00:08:25.040 06:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:25.040 06:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.040 1+0 records in 00:08:25.040 1+0 records out 00:08:25.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288039 s, 14.2 MB/s 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:25.040 06:18:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:25.040 06:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.040 06:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.040 06:18:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:25.302 /dev/nbd1 00:08:25.302 06:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.302 06:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.302 1+0 records in 00:08:25.302 1+0 records out 00:08:25.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292606 s, 14.0 MB/s 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:25.302 06:18:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:25.302 06:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.302 06:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.302 
06:18:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.302 06:18:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.302 06:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.564 { 00:08:25.564 "nbd_device": "/dev/nbd0", 00:08:25.564 "bdev_name": "Malloc0" 00:08:25.564 }, 00:08:25.564 { 00:08:25.564 "nbd_device": "/dev/nbd1", 00:08:25.564 "bdev_name": "Malloc1" 00:08:25.564 } 00:08:25.564 ]' 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.564 { 00:08:25.564 "nbd_device": "/dev/nbd0", 00:08:25.564 "bdev_name": "Malloc0" 00:08:25.564 }, 00:08:25.564 { 00:08:25.564 "nbd_device": "/dev/nbd1", 00:08:25.564 "bdev_name": "Malloc1" 00:08:25.564 } 00:08:25.564 ]' 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.564 /dev/nbd1' 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.564 /dev/nbd1' 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.564 256+0 records in 00:08:25.564 256+0 records out 00:08:25.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127227 s, 82.4 MB/s 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.564 256+0 records in 00:08:25.564 256+0 records out 00:08:25.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120083 s, 87.3 MB/s 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.564 256+0 records in 00:08:25.564 256+0 records out 00:08:25.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133557 s, 78.5 MB/s 00:08:25.564 06:18:45 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.564 06:18:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.565 06:18:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.827 06:18:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.088 06:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.349 06:18:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.349 06:18:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:26.611 06:18:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:26.611 [2024-11-20 06:18:46.747645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:26.611 [2024-11-20 06:18:46.776303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.611 [2024-11-20 06:18:46.776452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.611 [2024-11-20 06:18:46.805440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:26.611 [2024-11-20 06:18:46.805470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:29.912 06:18:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:29.912 06:18:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:29.912 spdk_app_start Round 1 00:08:29.912 06:18:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2604795 /var/tmp/spdk-nbd.sock 00:08:29.912 06:18:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2604795 ']' 00:08:29.912 06:18:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:29.912 06:18:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.913 06:18:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:29.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
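Each nbd_start_disk in the Round 0 trace above is followed by a waitfornbd readiness check. Reconstructed from the xtrace output (the 20-attempt retry limit and the 4096-byte direct read are from this run; the pacing between retries is an assumption, and /tmp/nbdtest stands in for the repo-local nbdtest file):

  # sketch of waitfornbd as traced above (simplified)
  waitfornbd() {
      local nbd_name=$1 i size
      # up to 20 attempts for the device to appear in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumption: the real helper may pace retries differently
      done
      # then a direct 4 KiB read must return data before the device counts as ready
      for ((i = 1; i <= 20; i++)); do
          dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
          size=$(stat -c %s /tmp/nbdtest)
          rm -f /tmp/nbdtest
          [ "$size" != 0 ] && return 0
      done
      return 1
  }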
00:08:29.913 06:18:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.913 06:18:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:29.913 06:18:49 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.913 06:18:49 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:29.913 06:18:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:29.913 Malloc0 00:08:29.913 06:18:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.173 Malloc1 00:08:30.173 06:18:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.173 06:18:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:30.173 /dev/nbd0 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:30.434 1+0 records in 00:08:30.434 1+0 records out 00:08:30.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279295 s, 14.7 MB/s 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:30.434 /dev/nbd1 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:30.434 1+0 records in 00:08:30.434 1+0 records out 00:08:30.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273796 s, 15.0 MB/s 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:30.434 06:18:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.434 06:18:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:30.696 { 00:08:30.696 "nbd_device": "/dev/nbd0", 00:08:30.696 "bdev_name": "Malloc0" 00:08:30.696 }, 00:08:30.696 { 00:08:30.696 "nbd_device": "/dev/nbd1", 00:08:30.696 "bdev_name": "Malloc1" 00:08:30.696 } 00:08:30.696 ]' 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:30.696 { 00:08:30.696 "nbd_device": "/dev/nbd0", 00:08:30.696 "bdev_name": "Malloc0" 00:08:30.696 }, 00:08:30.696 { 00:08:30.696 "nbd_device": "/dev/nbd1", 00:08:30.696 "bdev_name": "Malloc1" 00:08:30.696 } 00:08:30.696 ]' 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:30.696 /dev/nbd1' 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:30.696 /dev/nbd1' 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:30.696 256+0 records in 00:08:30.696 256+0 records out 00:08:30.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127623 s, 82.2 MB/s 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:30.696 256+0 records in 00:08:30.696 256+0 records out 00:08:30.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118423 s, 88.5 MB/s 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.696 06:18:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:30.957 256+0 records in 00:08:30.957 256+0 records out 00:08:30.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136029 s, 77.1 MB/s 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.957 06:18:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.957 06:18:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.218 06:18:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.219 06:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.480 06:18:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.480 06:18:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:31.740 06:18:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:31.740 [2024-11-20 06:18:51.888457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.740 [2024-11-20 06:18:51.918357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.740 [2024-11-20 06:18:51.918358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.740 [2024-11-20 06:18:51.948007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:31.740 [2024-11-20 06:18:51.948036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:35.040 06:18:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:35.040 06:18:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:35.040 spdk_app_start Round 2 00:08:35.040 06:18:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2604795 /var/tmp/spdk-nbd.sock 00:08:35.040 06:18:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2604795 ']' 00:08:35.040 06:18:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:35.040 06:18:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.040 06:18:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:35.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
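Rounds 0 and 1 above both run the same write-then-verify pass over the two Malloc-backed nbd devices, and Round 2 below repeats it. The pattern, sketched with /tmp standing in for the repo-local nbdrandtest file used in this run:

  # sketch: push 1 MiB of random data through each nbd device, then compare
  test_file=/tmp/nbdrandtest
  dd if=/dev/urandom of="$test_file" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      # write the pattern through the block device, bypassing the page cache
      dd if="$test_file" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$test_file" "$nbd"   # any mismatch fails the round
  done
  rm "$test_file"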
00:08:35.040 06:18:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.040 06:18:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:35.040 06:18:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:35.040 06:18:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:35.040 06:18:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:35.040 Malloc0 00:08:35.040 06:18:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:35.300 Malloc1 00:08:35.300 06:18:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.300 06:18:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:35.300 /dev/nbd0 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:35.562 1+0 records in 00:08:35.562 1+0 records out 00:08:35.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279617 s, 14.6 MB/s 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:35.562 /dev/nbd1 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.562 1+0 records in 00:08:35.562 1+0 records out 00:08:35.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274027 s, 14.9 MB/s 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.562 06:18:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.562 06:18:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:35.823 { 00:08:35.823 "nbd_device": "/dev/nbd0", 00:08:35.823 "bdev_name": "Malloc0" 00:08:35.823 }, 00:08:35.823 { 00:08:35.823 "nbd_device": "/dev/nbd1", 00:08:35.823 "bdev_name": "Malloc1" 00:08:35.823 } 00:08:35.823 ]' 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:35.823 { 00:08:35.823 "nbd_device": "/dev/nbd0", 00:08:35.823 "bdev_name": "Malloc0" 00:08:35.823 }, 00:08:35.823 { 00:08:35.823 "nbd_device": "/dev/nbd1", 00:08:35.823 "bdev_name": "Malloc1" 00:08:35.823 } 00:08:35.823 ]' 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:35.823 /dev/nbd1' 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:35.823 /dev/nbd1' 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:35.823 256+0 records in 00:08:35.823 256+0 records out 00:08:35.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121611 s, 86.2 MB/s 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.823 06:18:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:36.083 256+0 records in 00:08:36.083 256+0 records out 00:08:36.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123562 s, 84.9 MB/s 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:36.083 256+0 records in 00:08:36.083 256+0 records out 00:08:36.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129267 s, 81.1 MB/s 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.083 06:18:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.343 06:18:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:36.604 06:18:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:36.604 06:18:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:36.864 06:18:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:36.864 [2024-11-20 06:18:57.070023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.864 [2024-11-20 06:18:57.098527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.864 [2024-11-20 06:18:57.098528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.864 [2024-11-20 06:18:57.128134] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:36.864 [2024-11-20 06:18:57.128169] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:40.161 06:18:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2604795 /var/tmp/spdk-nbd.sock 00:08:40.161 06:18:59 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2604795 ']' 00:08:40.161 06:18:59 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.161 06:18:59 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.161 06:18:59 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
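Between rounds the trace tears everything down and restarts: stop both nbd devices, wait for them to leave /proc/partitions, confirm nbd_get_disks reports an empty list, then ask the app to exit. A condensed sketch of that sequence (rpc.py path and the 3-second settle time are from this run; the exact ordering inside the helpers is inferred from the xtrace output):

  # sketch of the per-round teardown traced above
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  for nbd in /dev/nbd0 /dev/nbd1; do
      $RPC nbd_stop_disk "$nbd"
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$(basename "$nbd")" /proc/partitions || break
      done
  done
  # nbd_get_disks must now return an empty JSON array
  count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]
  # restart cycle: the app is told to exit, then the next round begins
  $RPC spdk_kill_instance SIGTERM
  sleep 3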
00:08:40.161 06:18:59 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.161 06:18:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:40.161 06:19:00 event.app_repeat -- event/event.sh@39 -- # killprocess 2604795 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2604795 ']' 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2604795 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2604795 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2604795' 00:08:40.161 killing process with pid 2604795 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2604795 00:08:40.161 06:19:00 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2604795 00:08:40.161 spdk_app_start is called in Round 0. 00:08:40.162 Shutdown signal received, stop current app iteration 00:08:40.162 Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 reinitialization... 00:08:40.162 spdk_app_start is called in Round 1. 00:08:40.162 Shutdown signal received, stop current app iteration 00:08:40.162 Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 reinitialization... 00:08:40.162 spdk_app_start is called in Round 2. 00:08:40.162 Shutdown signal received, stop current app iteration 00:08:40.162 Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 reinitialization... 00:08:40.162 spdk_app_start is called in Round 3. 
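The killprocess helper traced above guards the kill with a liveness probe and a process-name check before signalling. A simplified sketch of those checks (the real helper in autotest_common.sh also handles processes started under sudo and failure retries, both elided here):

  # simplified killprocess, following the checks visible in the trace
  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0     # nothing left to kill
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [ "$process_name" != sudo ]; then       # a sudo wrapper would need its child signalled instead
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" || true                    # pid is a child of the test shell here
      fi
  }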
00:08:40.162 Shutdown signal received, stop current app iteration 00:08:40.162 06:19:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:40.162 06:19:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:40.162 00:08:40.162 real 0m15.875s 00:08:40.162 user 0m34.954s 00:08:40.162 sys 0m2.296s 00:08:40.162 06:19:00 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.162 06:19:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.162 ************************************ 00:08:40.162 END TEST app_repeat 00:08:40.162 ************************************ 00:08:40.162 06:19:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:40.162 06:19:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:40.162 06:19:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:40.162 06:19:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.162 06:19:00 event -- common/autotest_common.sh@10 -- # set +x 00:08:40.162 ************************************ 00:08:40.162 START TEST cpu_locks 00:08:40.162 ************************************ 00:08:40.162 06:19:00 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:40.423 * Looking for test storage... 00:08:40.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.423 06:19:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:40.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.423 --rc genhtml_branch_coverage=1 00:08:40.423 --rc genhtml_function_coverage=1 00:08:40.423 --rc genhtml_legend=1 00:08:40.423 --rc geninfo_all_blocks=1 00:08:40.423 --rc geninfo_unexecuted_blocks=1 00:08:40.423 00:08:40.423 ' 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:40.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.423 --rc genhtml_branch_coverage=1 00:08:40.423 --rc genhtml_function_coverage=1 00:08:40.423 --rc genhtml_legend=1 00:08:40.423 --rc geninfo_all_blocks=1 00:08:40.423 --rc geninfo_unexecuted_blocks=1 00:08:40.423 00:08:40.423 ' 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:40.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.423 --rc genhtml_branch_coverage=1 00:08:40.423 --rc genhtml_function_coverage=1 00:08:40.423 --rc genhtml_legend=1 00:08:40.423 --rc geninfo_all_blocks=1 00:08:40.423 --rc geninfo_unexecuted_blocks=1 00:08:40.423 00:08:40.423 ' 00:08:40.423 06:19:00 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:40.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.423 --rc genhtml_branch_coverage=1 00:08:40.423 --rc genhtml_function_coverage=1 00:08:40.423 --rc genhtml_legend=1 00:08:40.424 --rc geninfo_all_blocks=1 00:08:40.424 --rc geninfo_unexecuted_blocks=1 00:08:40.424 00:08:40.424 ' 00:08:40.424 06:19:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:40.424 06:19:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:40.424 06:19:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:40.424 06:19:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:40.424 06:19:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:40.424 06:19:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.424 06:19:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.424 ************************************ 
00:08:40.424 START TEST default_locks 00:08:40.424 ************************************ 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2608355 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2608355 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2608355 ']' 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.424 06:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 [2024-11-20 06:19:00.733032] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:40.685 [2024-11-20 06:19:00.733099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608355 ] 00:08:40.685 [2024-11-20 06:19:00.820866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.685 [2024-11-20 06:19:00.857346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.255 06:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.255 06:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:08:41.255 06:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2608355 00:08:41.255 06:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2608355 00:08:41.255 06:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:41.826 lslocks: write error 00:08:41.826 06:19:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2608355 00:08:41.826 06:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2608355 ']' 00:08:41.826 06:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2608355 00:08:41.826 06:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:08:41.826 06:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:41.826 06:19:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2608355 00:08:41.826 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:41.826 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:41.826 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 2608355' 00:08:41.826 killing process with pid 2608355 00:08:41.826 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2608355 00:08:41.826 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2608355 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2608355 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2608355 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2608355 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2608355 ']' 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
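The lock probe used by every test in cpu_locks.sh is the two-command pipeline traced a few entries back. A sketch, assuming SPDK's convention of one locked /var/tmp/spdk_cpu_lock_NNN file per claimed core; the 'lslocks: write error' lines are benign, since grep -q exits at the first match and lslocks then complains about the closed pipe:

    locks_exist() {
        local pid=$1
        # lslocks lists per-process file locks; a live SPDK app holds one
        # spdk_cpu_lock_* lock for each core it has claimed
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }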
00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2608355) - No such process 00:08:42.087 ERROR: process (pid: 2608355) is no longer running 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:42.087 00:08:42.087 real 0m1.563s 00:08:42.087 user 0m1.662s 00:08:42.087 sys 0m0.566s 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.087 06:19:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 ************************************ 00:08:42.087 END TEST default_locks 00:08:42.087 ************************************ 00:08:42.087 06:19:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:42.087 06:19:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:42.087 06:19:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.087 06:19:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 ************************************ 00:08:42.087 START TEST default_locks_via_rpc 00:08:42.087 ************************************ 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2608688 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2608688 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2608688 ']' 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
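The negative check resolved just above ('kill: (2608355) - No such process', return 1, es=1) is the standard NOT wrapper: run a command that is expected to fail and convert that failure into success. A hedged approximation of the es bookkeeping visible in the trace:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=1    # normalize signal exits down to a plain failure
        (( !es == 0 ))            # succeed only if the wrapped command failed
    }

    NOT waitforlisten 2608355    # passes precisely because that pid was killed above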
00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.087 06:19:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 [2024-11-20 06:19:02.357388] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:42.087 [2024-11-20 06:19:02.357452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608688 ] 00:08:42.348 [2024-11-20 06:19:02.446996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.348 [2024-11-20 06:19:02.485788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:42.918 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:42.919 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.919 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.919 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.919 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2608688 00:08:42.919 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2608688 00:08:42.919 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2608688 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2608688 ']' 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2608688 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2608688 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.489 
06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2608688' 00:08:43.489 killing process with pid 2608688 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2608688 00:08:43.489 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2608688 00:08:43.750 00:08:43.750 real 0m1.526s 00:08:43.750 user 0m1.663s 00:08:43.750 sys 0m0.526s 00:08:43.750 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.750 06:19:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.750 ************************************ 00:08:43.750 END TEST default_locks_via_rpc 00:08:43.750 ************************************ 00:08:43.750 06:19:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:43.750 06:19:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:43.750 06:19:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.750 06:19:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:43.750 ************************************ 00:08:43.750 START TEST non_locking_app_on_locked_coremask 00:08:43.750 ************************************ 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2609010 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2609010 /var/tmp/spdk.sock 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2609010 ']' 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:43.750 06:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:43.750 [2024-11-20 06:19:03.970893] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
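default_locks_via_rpc, which just finished, toggles the core locks over the RPC socket instead of restarting the target. Condensed, assuming rpc_cmd is the usual rpc.py wrapper and with helper names as used elsewhere in this log:

    # with spdk_tgt -m 0x1 already up and listening:
    rpc_cmd framework_disable_cpumask_locks   # lock files are released at runtime
    no_locks                                  # assert no /var/tmp/spdk_cpu_lock_* remain
    rpc_cmd framework_enable_cpumask_locks    # re-claim the cores
    locks_exist "$spdk_tgt_pid"               # the core-0 lock is back
    killprocess "$spdk_tgt_pid"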
00:08:43.750 [2024-11-20 06:19:03.970951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609010 ] 00:08:44.010 [2024-11-20 06:19:04.055834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.010 [2024-11-20 06:19:04.089503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2609131 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2609131 /var/tmp/spdk2.sock 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2609131 ']' 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:44.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.580 06:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.580 [2024-11-20 06:19:04.807733] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:44.580 [2024-11-20 06:19:04.807784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609131 ] 00:08:44.840 [2024-11-20 06:19:04.896006] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:44.840 [2024-11-20 06:19:04.896031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.840 [2024-11-20 06:19:04.954296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.411 06:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.411 06:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:45.411 06:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2609010 00:08:45.411 06:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2609010 00:08:45.411 06:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:45.982 lslocks: write error 00:08:45.982 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2609010 00:08:45.982 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2609010 ']' 00:08:45.982 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2609010 00:08:45.982 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:45.982 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:45.982 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2609010 00:08:46.243 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:46.243 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:46.243 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2609010' 00:08:46.243 killing process with pid 2609010 00:08:46.243 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2609010 00:08:46.243 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2609010 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2609131 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2609131 ']' 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2609131 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2609131 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2609131' 00:08:46.503 
killing process with pid 2609131 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2609131 00:08:46.503 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2609131 00:08:46.764 00:08:46.764 real 0m2.976s 00:08:46.764 user 0m3.304s 00:08:46.764 sys 0m0.937s 00:08:46.764 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.764 06:19:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:46.764 ************************************ 00:08:46.764 END TEST non_locking_app_on_locked_coremask 00:08:46.764 ************************************ 00:08:46.764 06:19:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:46.764 06:19:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.764 06:19:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.764 06:19:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:46.764 ************************************ 00:08:46.764 START TEST locking_app_on_unlocked_coremask 00:08:46.764 ************************************ 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2609546 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2609546 /var/tmp/spdk.sock 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2609546 ']' 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.764 06:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:46.764 [2024-11-20 06:19:07.014494] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:46.764 [2024-11-20 06:19:07.014553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609546 ] 00:08:47.025 [2024-11-20 06:19:07.099277] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:47.025 [2024-11-20 06:19:07.099306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.025 [2024-11-20 06:19:07.133330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2609837 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2609837 /var/tmp/spdk2.sock 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2609837 ']' 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:47.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:47.596 06:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:47.596 [2024-11-20 06:19:07.855079] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
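The two coremask tests on either side of this point share one setup: two spdk_tgt instances on the same core mask, kept from colliding by giving the second its own RPC socket and disabling cpumask locks on whichever side the scenario dictates. Roughly, with flags copied from the trace and variable names illustrative:

    spdk_tgt -m 0x1 &                          # instance 1 (locks on or off per test)
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                    # instance 2: same mask, own socket,
    waitforlisten "$pid2" /var/tmp/spdk2.sock  # no fight over the core-0 lock

Only the locked instance leaves a spdk_cpu_lock_000 file behind, which is what the locks_exist checks around the startups confirm.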
00:08:47.596 [2024-11-20 06:19:07.855132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609837 ] 00:08:47.856 [2024-11-20 06:19:07.941787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.856 [2024-11-20 06:19:07.999857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.427 06:19:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.427 06:19:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:48.427 06:19:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2609837 00:08:48.427 06:19:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:48.427 06:19:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2609837 00:08:49.095 lslocks: write error 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2609546 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2609546 ']' 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2609546 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2609546 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2609546' 00:08:49.095 killing process with pid 2609546 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2609546 00:08:49.095 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2609546 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2609837 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2609837 ']' 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2609837 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2609837 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:49.667 06:19:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2609837' 00:08:49.667 killing process with pid 2609837 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2609837 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2609837 00:08:49.667 00:08:49.667 real 0m2.986s 00:08:49.667 user 0m3.317s 00:08:49.667 sys 0m0.917s 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.667 06:19:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.667 ************************************ 00:08:49.667 END TEST locking_app_on_unlocked_coremask 00:08:49.667 ************************************ 00:08:49.928 06:19:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:49.928 06:19:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:49.928 06:19:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.928 06:19:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.928 ************************************ 00:08:49.928 START TEST locking_app_on_locked_coremask 00:08:49.928 ************************************ 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2610218 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2610218 /var/tmp/spdk.sock 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2610218 ']' 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:49.928 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.928 [2024-11-20 06:19:10.072666] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:49.928 [2024-11-20 06:19:10.072722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610218 ] 00:08:49.928 [2024-11-20 06:19:10.155535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.928 [2024-11-20 06:19:10.186041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2610541 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2610541 /var/tmp/spdk2.sock 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2610541 /var/tmp/spdk2.sock 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2610541 /var/tmp/spdk2.sock 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2610541 ']' 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:50.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.869 06:19:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 [2024-11-20 06:19:10.929359] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:50.869 [2024-11-20 06:19:10.929413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610541 ] 00:08:50.869 [2024-11-20 06:19:11.017195] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2610218 has claimed it. 00:08:50.869 [2024-11-20 06:19:11.017227] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:51.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2610541) - No such process 00:08:51.440 ERROR: process (pid: 2610541) is no longer running 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2610218 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2610218 00:08:51.440 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:51.701 lslocks: write error 00:08:51.701 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2610218 00:08:51.701 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2610218 ']' 00:08:51.701 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2610218 00:08:51.701 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:51.701 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:51.701 06:19:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2610218 00:08:51.962 06:19:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:51.962 06:19:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:51.962 06:19:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2610218' 00:08:51.962 killing process with pid 2610218 00:08:51.962 06:19:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2610218 00:08:51.962 06:19:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2610218 00:08:51.962 00:08:51.962 real 0m2.196s 00:08:51.962 user 0m2.487s 00:08:51.962 sys 0m0.621s 00:08:51.962 06:19:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:08:51.962 06:19:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.962 ************************************ 00:08:51.962 END TEST locking_app_on_locked_coremask 00:08:51.962 ************************************ 00:08:52.223 06:19:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:52.223 06:19:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:52.223 06:19:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.223 06:19:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.223 ************************************ 00:08:52.223 START TEST locking_overlapped_coremask 00:08:52.223 ************************************ 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2610764 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2610764 /var/tmp/spdk.sock 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2610764 ']' 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.223 06:19:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.223 [2024-11-20 06:19:12.350220] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
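locking_app_on_locked_coremask, which ended just above, asserts the converse: with locks enabled on both sides, the second instance must abort with the claim error seen in the log. The expected shape, hedged:

    spdk_tgt -m 0x1 & pid1=$!
    waitforlisten "$pid1"

    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
    # startup must fail with app.c's
    # "Cannot create lock on core 0, probably process <pid1> has claimed it"
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock

    locks_exist "$pid1"    # the survivor still holds its core-0 lock
    killprocess "$pid1"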
00:08:52.223 [2024-11-20 06:19:12.350279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610764 ] 00:08:52.223 [2024-11-20 06:19:12.436286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.223 [2024-11-20 06:19:12.478805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.223 [2024-11-20 06:19:12.478958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.223 [2024-11-20 06:19:12.478959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2610931 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2610931 /var/tmp/spdk2.sock 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2610931 /var/tmp/spdk2.sock 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2610931 /var/tmp/spdk2.sock 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2610931 ']' 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:53.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.166 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.166 [2024-11-20 06:19:13.210808] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:08:53.166 [2024-11-20 06:19:13.210860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610931 ] 00:08:53.166 [2024-11-20 06:19:13.323462] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2610764 has claimed it. 00:08:53.166 [2024-11-20 06:19:13.323503] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:53.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2610931) - No such process 00:08:53.738 ERROR: process (pid: 2610931) is no longer running 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2610764 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2610764 ']' 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2610764 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2610764 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2610764' 00:08:53.738 killing process with pid 2610764 00:08:53.738 06:19:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2610764 00:08:53.738 06:19:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2610764 00:08:53.999 00:08:53.999 real 0m1.787s 00:08:53.999 user 0m5.143s 00:08:53.999 sys 0m0.411s 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.999 ************************************ 00:08:53.999 END TEST locking_overlapped_coremask 00:08:53.999 ************************************ 00:08:53.999 06:19:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:53.999 06:19:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:53.999 06:19:14 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.999 06:19:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:53.999 ************************************ 00:08:53.999 START TEST locking_overlapped_coremask_via_rpc 00:08:53.999 ************************************ 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2611217 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2611217 /var/tmp/spdk.sock 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2611217 ']' 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.999 06:19:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.999 [2024-11-20 06:19:14.221921] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:53.999 [2024-11-20 06:19:14.221981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2611217 ] 00:08:54.259 [2024-11-20 06:19:14.309854] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
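Every START TEST / END TEST banner pair in this log, and the real/user/sys line printed between them, comes from the run_test wrapper that launches each case. A rough sketch of that wrapper, hedged to what the banners here actually show (the real helper in test/common/autotest_common.sh additionally validates its arguments and manages xtrace):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # produces the real/user/sys summary seen after each test
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }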
00:08:54.259 [2024-11-20 06:19:14.309897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.259 [2024-11-20 06:19:14.351416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.259 [2024-11-20 06:19:14.351628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.259 [2024-11-20 06:19:14.351628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2611307 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2611307 /var/tmp/spdk2.sock 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2611307 ']' 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:54.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.829 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.829 [2024-11-20 06:19:15.071327] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:54.829 [2024-11-20 06:19:15.071379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2611307 ] 00:08:55.090 [2024-11-20 06:19:15.184714] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:55.090 [2024-11-20 06:19:15.184747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:55.090 [2024-11-20 06:19:15.262527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.090 [2024-11-20 06:19:15.262684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.090 [2024-11-20 06:19:15.262686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.661 [2024-11-20 06:19:15.863244] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2611217 has claimed it. 
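The claim_cpu_cores error above is the intended outcome: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both targets contend for core 2, and only one process may hold the per-core lock file (the /var/tmp/spdk_cpu_lock_000..002 paths that check_remaining_locks verifies elsewhere in this log). A hypothetical shell illustration of the same claim, assuming flock(1) semantics comparable to SPDK's C implementation in app.c; the JSON-RPC error that follows is the client-side view of this failure:

  # Hypothetical illustration only; SPDK takes these locks in C, not via flock(1).
  core=2
  exec {fd}>"/var/tmp/spdk_cpu_lock_00${core}"
  if ! flock -n "$fd"; then
      echo "Cannot create lock on core ${core}, another process has claimed it" >&2
  fi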
00:08:55.661 request: 00:08:55.661 { 00:08:55.661 "method": "framework_enable_cpumask_locks", 00:08:55.661 "req_id": 1 00:08:55.661 } 00:08:55.661 Got JSON-RPC error response 00:08:55.661 response: 00:08:55.661 { 00:08:55.661 "code": -32603, 00:08:55.661 "message": "Failed to claim CPU core: 2" 00:08:55.661 } 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2611217 /var/tmp/spdk.sock 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2611217 ']' 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.661 06:19:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2611307 /var/tmp/spdk2.sock 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2611307 ']' 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:55.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
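The -32603 response above is the RPC variant of the same conflict: both targets were started with --disable-cpumask-locks, the first one then claimed its cores through framework_enable_cpumask_locks, and the identical call on the second target's socket fails because core 2 is already locked. A by-hand reproduction, using the socket paths from this run:

  scripts/rpc.py framework_enable_cpumask_locks                         # first target, locks cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target, -32603: core 2 taken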
00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.922 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.182 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.182 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:56.182 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:56.182 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:56.182 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:56.182 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:56.182 00:08:56.182 real 0m2.084s 00:08:56.182 user 0m0.886s 00:08:56.182 sys 0m0.126s 00:08:56.183 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.183 06:19:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.183 ************************************ 00:08:56.183 END TEST locking_overlapped_coremask_via_rpc 00:08:56.183 ************************************ 00:08:56.183 06:19:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:56.183 06:19:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2611217 ]] 00:08:56.183 06:19:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2611217 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2611217 ']' 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2611217 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2611217 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2611217' 00:08:56.183 killing process with pid 2611217 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2611217 00:08:56.183 06:19:16 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2611217 00:08:56.445 06:19:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2611307 ]] 00:08:56.445 06:19:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2611307 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2611307 ']' 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2611307 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2611307 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2611307' 00:08:56.445 killing process with pid 2611307 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2611307 00:08:56.445 06:19:16 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2611307 00:08:56.705 06:19:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:56.705 06:19:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:56.705 06:19:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2611217 ]] 00:08:56.705 06:19:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2611217 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2611217 ']' 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2611217 00:08:56.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2611217) - No such process 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2611217 is not found' 00:08:56.705 Process with pid 2611217 is not found 00:08:56.705 06:19:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2611307 ]] 00:08:56.705 06:19:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2611307 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2611307 ']' 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2611307 00:08:56.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2611307) - No such process 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2611307 is not found' 00:08:56.705 Process with pid 2611307 is not found 00:08:56.705 06:19:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:56.705 00:08:56.705 real 0m16.398s 00:08:56.705 user 0m28.438s 00:08:56.705 sys 0m5.084s 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.705 06:19:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:56.705 ************************************ 00:08:56.705 END TEST cpu_locks 00:08:56.705 ************************************ 00:08:56.705 00:08:56.705 real 0m42.282s 00:08:56.705 user 1m22.827s 00:08:56.705 sys 0m8.461s 00:08:56.705 06:19:16 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.705 06:19:16 event -- common/autotest_common.sh@10 -- # set +x 00:08:56.705 ************************************ 00:08:56.705 END TEST event 00:08:56.705 ************************************ 00:08:56.705 06:19:16 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:56.705 06:19:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:56.705 06:19:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.705 06:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.705 ************************************ 00:08:56.705 START TEST thread 00:08:56.705 ************************************ 00:08:56.705 06:19:16 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:56.966 * Looking for test storage... 00:08:56.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:56.966 06:19:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.966 06:19:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.966 06:19:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.966 06:19:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.966 06:19:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.966 06:19:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.966 06:19:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.966 06:19:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.966 06:19:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.966 06:19:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.966 06:19:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.966 06:19:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:56.966 06:19:17 thread -- scripts/common.sh@345 -- # : 1 00:08:56.966 06:19:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.966 06:19:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.966 06:19:17 thread -- scripts/common.sh@365 -- # decimal 1 00:08:56.966 06:19:17 thread -- scripts/common.sh@353 -- # local d=1 00:08:56.966 06:19:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.966 06:19:17 thread -- scripts/common.sh@355 -- # echo 1 00:08:56.966 06:19:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.966 06:19:17 thread -- scripts/common.sh@366 -- # decimal 2 00:08:56.966 06:19:17 thread -- scripts/common.sh@353 -- # local d=2 00:08:56.966 06:19:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.966 06:19:17 thread -- scripts/common.sh@355 -- # echo 2 00:08:56.966 06:19:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.966 06:19:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.966 06:19:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.966 06:19:17 thread -- scripts/common.sh@368 -- # return 0 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:56.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.966 --rc genhtml_branch_coverage=1 00:08:56.966 --rc genhtml_function_coverage=1 00:08:56.966 --rc genhtml_legend=1 00:08:56.966 --rc geninfo_all_blocks=1 00:08:56.966 --rc geninfo_unexecuted_blocks=1 00:08:56.966 00:08:56.966 ' 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:56.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.966 --rc genhtml_branch_coverage=1 00:08:56.966 --rc genhtml_function_coverage=1 00:08:56.966 --rc genhtml_legend=1 00:08:56.966 --rc geninfo_all_blocks=1 00:08:56.966 --rc geninfo_unexecuted_blocks=1 00:08:56.966 
00:08:56.966 ' 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:56.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.966 --rc genhtml_branch_coverage=1 00:08:56.966 --rc genhtml_function_coverage=1 00:08:56.966 --rc genhtml_legend=1 00:08:56.966 --rc geninfo_all_blocks=1 00:08:56.966 --rc geninfo_unexecuted_blocks=1 00:08:56.966 00:08:56.966 ' 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:56.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.966 --rc genhtml_branch_coverage=1 00:08:56.966 --rc genhtml_function_coverage=1 00:08:56.966 --rc genhtml_legend=1 00:08:56.966 --rc geninfo_all_blocks=1 00:08:56.966 --rc geninfo_unexecuted_blocks=1 00:08:56.966 00:08:56.966 ' 00:08:56.966 06:19:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.966 06:19:17 thread -- common/autotest_common.sh@10 -- # set +x 00:08:56.966 ************************************ 00:08:56.966 START TEST thread_poller_perf 00:08:56.966 ************************************ 00:08:56.966 06:19:17 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:56.966 [2024-11-20 06:19:17.185946] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:56.966 [2024-11-20 06:19:17.186064] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2611805 ] 00:08:57.226 [2024-11-20 06:19:17.275097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.226 [2024-11-20 06:19:17.316054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.226 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:58.165 [2024-11-20T05:19:18.444Z] ====================================== 00:08:58.165 [2024-11-20T05:19:18.444Z] busy:2409603736 (cyc) 00:08:58.165 [2024-11-20T05:19:18.444Z] total_run_count: 418000 00:08:58.165 [2024-11-20T05:19:18.444Z] tsc_hz: 2400000000 (cyc) 00:08:58.165 [2024-11-20T05:19:18.444Z] ====================================== 00:08:58.165 [2024-11-20T05:19:18.444Z] poller_cost: 5764 (cyc), 2401 (nsec) 00:08:58.165 00:08:58.165 real 0m1.186s 00:08:58.165 user 0m1.104s 00:08:58.165 sys 0m0.077s 00:08:58.165 06:19:18 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.165 06:19:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:58.165 ************************************ 00:08:58.166 END TEST thread_poller_perf 00:08:58.166 ************************************ 00:08:58.166 06:19:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:58.166 06:19:18 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:58.166 06:19:18 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.166 06:19:18 thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.166 ************************************ 00:08:58.166 START TEST thread_poller_perf 00:08:58.166 ************************************ 00:08:58.166 06:19:18 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:58.426 [2024-11-20 06:19:18.449001] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:08:58.426 [2024-11-20 06:19:18.449097] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2612106 ] 00:08:58.426 [2024-11-20 06:19:18.535118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.426 [2024-11-20 06:19:18.564697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.426 Running 1000 pollers for 1 seconds with 0 microseconds period. 
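The poller_cost figures in these tables are derived values: busy cycles divided by total_run_count gives cycles per poller iteration, and tsc_hz converts that to nanoseconds. A worked check against the 1-microsecond run above:

  busy=2409603736 runs=418000 tsc_hz=2400000000
  echo $(( busy / runs ))                         # 5764 cycles per iteration
  echo $(( busy * 1000000000 / tsc_hz / runs ))   # 2401 nsec at 2.4 GHz

The zero-period run launched just above with -l 0 reports its results in the same format.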
00:08:59.366 [2024-11-20T05:19:19.645Z] ====================================== 00:08:59.366 [2024-11-20T05:19:19.645Z] busy:2401296394 (cyc) 00:08:59.366 [2024-11-20T05:19:19.645Z] total_run_count: 5553000 00:08:59.366 [2024-11-20T05:19:19.645Z] tsc_hz: 2400000000 (cyc) 00:08:59.366 [2024-11-20T05:19:19.645Z] ====================================== 00:08:59.366 [2024-11-20T05:19:19.645Z] poller_cost: 432 (cyc), 180 (nsec) 00:08:59.366 00:08:59.366 real 0m1.166s 00:08:59.366 user 0m1.091s 00:08:59.366 sys 0m0.071s 00:08:59.366 06:19:19 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.366 06:19:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.366 ************************************ 00:08:59.366 END TEST thread_poller_perf 00:08:59.366 ************************************ 00:08:59.367 06:19:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:59.367 00:08:59.367 real 0m2.705s 00:08:59.367 user 0m2.382s 00:08:59.367 sys 0m0.336s 00:08:59.367 06:19:19 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.367 06:19:19 thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.367 ************************************ 00:08:59.367 END TEST thread 00:08:59.367 ************************************ 00:08:59.629 06:19:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:59.629 06:19:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:59.629 06:19:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:59.629 06:19:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.629 06:19:19 -- common/autotest_common.sh@10 -- # set +x 00:08:59.629 ************************************ 00:08:59.629 START TEST app_cmdline 00:08:59.629 ************************************ 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:59.629 * Looking for test storage... 
00:08:59.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.629 06:19:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.629 --rc genhtml_branch_coverage=1 00:08:59.629 --rc genhtml_function_coverage=1 00:08:59.629 --rc genhtml_legend=1 00:08:59.629 --rc geninfo_all_blocks=1 00:08:59.629 --rc geninfo_unexecuted_blocks=1 00:08:59.629 00:08:59.629 ' 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.629 --rc genhtml_branch_coverage=1 00:08:59.629 --rc genhtml_function_coverage=1 00:08:59.629 --rc genhtml_legend=1 00:08:59.629 --rc geninfo_all_blocks=1 00:08:59.629 --rc geninfo_unexecuted_blocks=1 
00:08:59.629 00:08:59.629 ' 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.629 --rc genhtml_branch_coverage=1 00:08:59.629 --rc genhtml_function_coverage=1 00:08:59.629 --rc genhtml_legend=1 00:08:59.629 --rc geninfo_all_blocks=1 00:08:59.629 --rc geninfo_unexecuted_blocks=1 00:08:59.629 00:08:59.629 ' 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.629 --rc genhtml_branch_coverage=1 00:08:59.629 --rc genhtml_function_coverage=1 00:08:59.629 --rc genhtml_legend=1 00:08:59.629 --rc geninfo_all_blocks=1 00:08:59.629 --rc geninfo_unexecuted_blocks=1 00:08:59.629 00:08:59.629 ' 00:08:59.629 06:19:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:59.629 06:19:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2612503 00:08:59.629 06:19:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2612503 00:08:59.629 06:19:19 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2612503 ']' 00:08:59.629 06:19:19 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.892 06:19:19 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.892 06:19:19 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.892 06:19:19 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.892 06:19:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:59.892 [2024-11-20 06:19:19.972433] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
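This target is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two RPCs are callable and every other method is rejected; the env_dpdk_get_mem_stats failure further down is the negative half of the check. In outline, with the paths used in this workspace:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version          # allowed: returns the version JSON below
  scripts/rpc.py rpc_get_methods           # allowed: lists exactly these two methods
  scripts/rpc.py env_dpdk_get_mem_stats    # rejected: JSON-RPC -32601 Method not found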
00:08:59.892 [2024-11-20 06:19:19.972484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2612503 ] 00:08:59.892 [2024-11-20 06:19:20.055463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.892 [2024-11-20 06:19:20.086701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:00.834 { 00:09:00.834 "version": "SPDK v25.01-pre git sha1 ac2633210", 00:09:00.834 "fields": { 00:09:00.834 "major": 25, 00:09:00.834 "minor": 1, 00:09:00.834 "patch": 0, 00:09:00.834 "suffix": "-pre", 00:09:00.834 "commit": "ac2633210" 00:09:00.834 } 00:09:00.834 } 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:00.834 06:19:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:00.834 06:19:20 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:00.835 06:19:20 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.096 request: 00:09:01.096 { 00:09:01.096 "method": "env_dpdk_get_mem_stats", 00:09:01.096 "req_id": 1 00:09:01.096 } 00:09:01.096 Got JSON-RPC error response 00:09:01.096 response: 00:09:01.096 { 00:09:01.096 "code": -32601, 00:09:01.096 "message": "Method not found" 00:09:01.096 } 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.096 06:19:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2612503 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2612503 ']' 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2612503 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2612503 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2612503' 00:09:01.096 killing process with pid 2612503 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@971 -- # kill 2612503 00:09:01.096 06:19:21 app_cmdline -- common/autotest_common.sh@976 -- # wait 2612503 00:09:01.358 00:09:01.358 real 0m1.717s 00:09:01.358 user 0m2.088s 00:09:01.358 sys 0m0.447s 00:09:01.358 06:19:21 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.358 06:19:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:01.358 ************************************ 00:09:01.358 END TEST app_cmdline 00:09:01.358 ************************************ 00:09:01.358 06:19:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:01.358 06:19:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:01.358 06:19:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.358 06:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:01.358 ************************************ 00:09:01.358 START TEST version 00:09:01.358 ************************************ 00:09:01.358 06:19:21 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:01.358 * Looking for test storage... 
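The version test starting here reads include/spdk/version.h field by field; as traced below, each get_header_version call is a grep/cut/tr pipeline. A condensed sketch of that pipeline:

  get_header_version() {   # e.g. get_header_version major prints 25
      grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h \
          | cut -f2 | tr -d '"'
  }

With major=25, minor=1, patch=0 and suffix=-pre, the script assembles 25.1 and maps the -pre suffix to the Python-style 25.1rc0 that python3 -c 'import spdk; print(spdk.__version__)' reports.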
00:09:01.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:01.358 06:19:21 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.358 06:19:21 version -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.358 06:19:21 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.619 06:19:21 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.619 06:19:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.619 06:19:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.619 06:19:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.619 06:19:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.619 06:19:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.619 06:19:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.619 06:19:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.619 06:19:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.619 06:19:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.619 06:19:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.619 06:19:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.619 06:19:21 version -- scripts/common.sh@344 -- # case "$op" in 00:09:01.619 06:19:21 version -- scripts/common.sh@345 -- # : 1 00:09:01.619 06:19:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.619 06:19:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.619 06:19:21 version -- scripts/common.sh@365 -- # decimal 1 00:09:01.619 06:19:21 version -- scripts/common.sh@353 -- # local d=1 00:09:01.619 06:19:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.619 06:19:21 version -- scripts/common.sh@355 -- # echo 1 00:09:01.619 06:19:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.619 06:19:21 version -- scripts/common.sh@366 -- # decimal 2 00:09:01.619 06:19:21 version -- scripts/common.sh@353 -- # local d=2 00:09:01.619 06:19:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.619 06:19:21 version -- scripts/common.sh@355 -- # echo 2 00:09:01.619 06:19:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.619 06:19:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.619 06:19:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.619 06:19:21 version -- scripts/common.sh@368 -- # return 0 00:09:01.619 06:19:21 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.619 06:19:21 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.619 --rc genhtml_branch_coverage=1 00:09:01.619 --rc genhtml_function_coverage=1 00:09:01.619 --rc genhtml_legend=1 00:09:01.619 --rc geninfo_all_blocks=1 00:09:01.619 --rc geninfo_unexecuted_blocks=1 00:09:01.619 00:09:01.619 ' 00:09:01.619 06:19:21 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.620 --rc genhtml_branch_coverage=1 00:09:01.620 --rc genhtml_function_coverage=1 00:09:01.620 --rc genhtml_legend=1 00:09:01.620 --rc geninfo_all_blocks=1 00:09:01.620 --rc geninfo_unexecuted_blocks=1 00:09:01.620 00:09:01.620 ' 00:09:01.620 06:19:21 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.620 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.620 --rc genhtml_branch_coverage=1 00:09:01.620 --rc genhtml_function_coverage=1 00:09:01.620 --rc genhtml_legend=1 00:09:01.620 --rc geninfo_all_blocks=1 00:09:01.620 --rc geninfo_unexecuted_blocks=1 00:09:01.620 00:09:01.620 ' 00:09:01.620 06:19:21 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.620 --rc genhtml_branch_coverage=1 00:09:01.620 --rc genhtml_function_coverage=1 00:09:01.620 --rc genhtml_legend=1 00:09:01.620 --rc geninfo_all_blocks=1 00:09:01.620 --rc geninfo_unexecuted_blocks=1 00:09:01.620 00:09:01.620 ' 00:09:01.620 06:19:21 version -- app/version.sh@17 -- # get_header_version major 00:09:01.620 06:19:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # cut -f2 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # tr -d '"' 00:09:01.620 06:19:21 version -- app/version.sh@17 -- # major=25 00:09:01.620 06:19:21 version -- app/version.sh@18 -- # get_header_version minor 00:09:01.620 06:19:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # cut -f2 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # tr -d '"' 00:09:01.620 06:19:21 version -- app/version.sh@18 -- # minor=1 00:09:01.620 06:19:21 version -- app/version.sh@19 -- # get_header_version patch 00:09:01.620 06:19:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # cut -f2 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # tr -d '"' 00:09:01.620 06:19:21 version -- app/version.sh@19 -- # patch=0 00:09:01.620 06:19:21 version -- app/version.sh@20 -- # get_header_version suffix 00:09:01.620 06:19:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # cut -f2 00:09:01.620 06:19:21 version -- app/version.sh@14 -- # tr -d '"' 00:09:01.620 06:19:21 version -- app/version.sh@20 -- # suffix=-pre 00:09:01.620 06:19:21 version -- app/version.sh@22 -- # version=25.1 00:09:01.620 06:19:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:01.620 06:19:21 version -- app/version.sh@28 -- # version=25.1rc0 00:09:01.620 06:19:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:01.620 06:19:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:01.620 06:19:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:01.620 06:19:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:01.620 00:09:01.620 real 0m0.276s 00:09:01.620 user 0m0.171s 00:09:01.620 sys 0m0.154s 00:09:01.620 06:19:21 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.620 
06:19:21 version -- common/autotest_common.sh@10 -- # set +x 00:09:01.620 ************************************ 00:09:01.620 END TEST version 00:09:01.620 ************************************ 00:09:01.620 06:19:21 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:01.620 06:19:21 -- spdk/autotest.sh@194 -- # uname -s 00:09:01.620 06:19:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:01.620 06:19:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:01.620 06:19:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:01.620 06:19:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:01.620 06:19:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.620 06:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:01.620 06:19:21 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:01.620 06:19:21 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:01.620 06:19:21 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:01.620 06:19:21 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:01.620 06:19:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.620 06:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:01.881 ************************************ 00:09:01.881 START TEST nvmf_tcp 00:09:01.881 ************************************ 00:09:01.881 06:19:21 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:01.881 * Looking for test storage... 
00:09:01.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:01.881 06:19:21 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.881 06:19:22 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.881 06:19:22 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.881 06:19:22 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.881 06:19:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.882 06:19:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.882 --rc genhtml_branch_coverage=1 00:09:01.882 --rc genhtml_function_coverage=1 00:09:01.882 --rc genhtml_legend=1 00:09:01.882 --rc geninfo_all_blocks=1 00:09:01.882 --rc geninfo_unexecuted_blocks=1 00:09:01.882 00:09:01.882 ' 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.882 --rc genhtml_branch_coverage=1 00:09:01.882 --rc genhtml_function_coverage=1 00:09:01.882 --rc genhtml_legend=1 00:09:01.882 --rc geninfo_all_blocks=1 00:09:01.882 --rc geninfo_unexecuted_blocks=1 00:09:01.882 00:09:01.882 ' 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:09:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.882 --rc genhtml_branch_coverage=1 00:09:01.882 --rc genhtml_function_coverage=1 00:09:01.882 --rc genhtml_legend=1 00:09:01.882 --rc geninfo_all_blocks=1 00:09:01.882 --rc geninfo_unexecuted_blocks=1 00:09:01.882 00:09:01.882 ' 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.882 --rc genhtml_branch_coverage=1 00:09:01.882 --rc genhtml_function_coverage=1 00:09:01.882 --rc genhtml_legend=1 00:09:01.882 --rc geninfo_all_blocks=1 00:09:01.882 --rc geninfo_unexecuted_blocks=1 00:09:01.882 00:09:01.882 ' 00:09:01.882 06:19:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:01.882 06:19:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:01.882 06:19:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.882 06:19:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.882 ************************************ 00:09:01.882 START TEST nvmf_target_core 00:09:01.882 ************************************ 00:09:01.882 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:02.144 * Looking for test storage... 00:09:02.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:02.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.144 --rc genhtml_branch_coverage=1 00:09:02.144 --rc genhtml_function_coverage=1 00:09:02.144 --rc genhtml_legend=1 00:09:02.144 --rc geninfo_all_blocks=1 00:09:02.144 --rc geninfo_unexecuted_blocks=1 00:09:02.144 00:09:02.144 ' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:02.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.144 --rc genhtml_branch_coverage=1 00:09:02.144 --rc genhtml_function_coverage=1 00:09:02.144 --rc genhtml_legend=1 00:09:02.144 --rc geninfo_all_blocks=1 00:09:02.144 --rc geninfo_unexecuted_blocks=1 00:09:02.144 00:09:02.144 ' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:02.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.144 --rc genhtml_branch_coverage=1 00:09:02.144 --rc genhtml_function_coverage=1 00:09:02.144 --rc genhtml_legend=1 00:09:02.144 --rc geninfo_all_blocks=1 00:09:02.144 --rc geninfo_unexecuted_blocks=1 00:09:02.144 00:09:02.144 ' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:02.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.144 --rc genhtml_branch_coverage=1 00:09:02.144 --rc genhtml_function_coverage=1 00:09:02.144 --rc genhtml_legend=1 00:09:02.144 --rc geninfo_all_blocks=1 00:09:02.144 --rc geninfo_unexecuted_blocks=1 00:09:02.144 00:09:02.144 ' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:02.144 06:19:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:02.145 06:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 
************************************ 00:09:02.407 START TEST nvmf_abort 00:09:02.407 ************************************ 00:09:02.407 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:02.407 * Looking for test storage... 00:09:02.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:02.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.408 --rc genhtml_branch_coverage=1 00:09:02.408 --rc genhtml_function_coverage=1 00:09:02.408 --rc genhtml_legend=1 00:09:02.408 --rc geninfo_all_blocks=1 00:09:02.408 --rc geninfo_unexecuted_blocks=1 00:09:02.408 00:09:02.408 ' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:02.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.408 --rc genhtml_branch_coverage=1 00:09:02.408 --rc genhtml_function_coverage=1 00:09:02.408 --rc genhtml_legend=1 00:09:02.408 --rc geninfo_all_blocks=1 00:09:02.408 --rc geninfo_unexecuted_blocks=1 00:09:02.408 00:09:02.408 ' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:02.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.408 --rc genhtml_branch_coverage=1 00:09:02.408 --rc genhtml_function_coverage=1 00:09:02.408 --rc genhtml_legend=1 00:09:02.408 --rc geninfo_all_blocks=1 00:09:02.408 --rc geninfo_unexecuted_blocks=1 00:09:02.408 00:09:02.408 ' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:02.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.408 --rc genhtml_branch_coverage=1 00:09:02.408 --rc genhtml_function_coverage=1 00:09:02.408 --rc genhtml_legend=1 00:09:02.408 --rc geninfo_all_blocks=1 00:09:02.408 --rc geninfo_unexecuted_blocks=1 00:09:02.408 00:09:02.408 ' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:02.408 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
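The scripts/common.sh xtrace above, repeated once per test script that sources autotest_common.sh, is probing the installed lcov before coverage options are exported: lt 1.15 2 splits both version strings on ".", "-" and ":" and compares them field by field, and because lcov 1.x sorts before 2 the run picks the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" flags seen in LCOV_OPTS. A minimal sketch of that comparison in plain bash, assuming numeric-only fields as in the traced probe (a simplified reconstruction, not the exact scripts/common.sh source):

    # Return 0 (true) when dotted version $1 sorts strictly before $2.
    # Assumption: numeric fields only, as in the "lt 1.15 2" probe above.
    lt() {
        local -a ver1 ver2
        local IFS=.-:                    # same separators the trace shows
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                         # equal versions are not less-than
    }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov 1.x detected: use the legacy --rc coverage options"
    fi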
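One entry in that sourcing trace is worth flagging: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and the test builtin refuses an empty string where it expects a number, which is the "[: : integer expression expected" message captured above and again each time common.sh is re-sourced. The test merely evaluates false and the run continues, so the message is noise rather than a failure. A minimal reproduction, where MAYBE_SET is a hypothetical stand-in for whichever unset flag common.sh checks on that line:

    # An empty value is not an integer, so [ complains on stderr and the
    # test returns false; execution continues exactly as in the log.
    unset MAYBE_SET
    [ "$MAYBE_SET" -eq 1 ] && echo enabled    # [: : integer expression expected
    # Defaulting the variable keeps the same logic without the noise:
    [ "${MAYBE_SET:-0}" -eq 1 ] && echo enabled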
00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.409 06:19:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.548 06:19:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:10.548 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.548 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:10.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:10.549 06:19:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:10.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:10.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.549 06:19:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.549 06:19:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:10.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:09:10.549 00:09:10.549 --- 10.0.0.2 ping statistics --- 00:09:10.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.549 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:09:10.549 00:09:10.549 --- 10.0.0.1 ping statistics --- 00:09:10.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.549 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2616990 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2616990 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2616990 ']' 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.549 06:19:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.549 [2024-11-20 06:19:30.256371] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
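The nvmf_tcp_init sequence above is what lets a single machine exercise NVMe/TCP over real e810 hardware against itself: one port (cvl_0_0) is moved into a private network namespace and becomes the target NIC at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator NIC at 10.0.0.1, and the two pings confirm the link in both directions before any SPDK code runs. Condensed from the trace, with the same device names, addresses, and rule tag as logged:

    # Target port lives in its own namespace; initiator port stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP on the default port; the comment tags the rule so the
    # iptr teardown can later strip exactly what this run added.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Both directions answered above (0.661 ms and 0.316 ms round trips).
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every later target-side command, including the nvmf_tgt launch that follows, is run through "ip netns exec cvl_0_0_ns_spdk" via NVMF_TARGET_NS_CMD.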
00:09:10.549 [2024-11-20 06:19:30.256442] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.549 [2024-11-20 06:19:30.355745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:10.549 [2024-11-20 06:19:30.408855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.549 [2024-11-20 06:19:30.408910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.549 [2024-11-20 06:19:30.408919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.549 [2024-11-20 06:19:30.408926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.549 [2024-11-20 06:19:30.408932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.549 [2024-11-20 06:19:30.410957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.549 [2024-11-20 06:19:30.411109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.549 [2024-11-20 06:19:30.411109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.811 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:10.811 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:09:10.811 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.811 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.811 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 [2024-11-20 06:19:31.127981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 Malloc0 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 Delay0 
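nvmf_tgt is started with -m 0xE, which is why exactly three reactors come up on cores 1 through 3 in the notices above while core 0 is left free for the abort example's initiator (-c 0x1 below). From here abort.sh configures the target entirely over JSON-RPC: a TCP transport, a 64 MiB Malloc bdev wrapped in a delay bdev so that I/O stays in flight long enough to be aborted, and then, as the next entries show, a subsystem exposing that bdev plus a TCP listener on 10.0.0.2:4420. rpc_cmd forwards to scripts/rpc.py, so the equivalent standalone calls would look roughly like this (a sketch assuming the default /var/tmp/spdk.sock RPC socket):

    # Mirrors the rpc_cmd sequence traced here and in the entries just below.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB, 4 KiB blocks
    # Delay latencies are given in microseconds, so every I/O is held for
    # about a second: queued commands are still pending when aborts arrive.
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The abort statistics a few entries below are the payoff: of 28711 abort commands submitted, 28654 succeeded, the expected shape when nearly every queued I/O is still parked in the delay bdev when its abort request lands.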
00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 [2024-11-20 06:19:31.218419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 06:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:11.334 [2024-11-20 06:19:31.409228] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:13.883 Initializing NVMe Controllers 00:09:13.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:13.883 controller IO queue size 128 less than required 00:09:13.883 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:13.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:13.883 Initialization complete. Launching workers. 
00:09:13.883 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28650 00:09:13.883 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28711, failed to submit 62 00:09:13.883 success 28654, unsuccessful 57, failed 0 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.883 rmmod nvme_tcp 00:09:13.883 rmmod nvme_fabrics 00:09:13.883 rmmod nvme_keyring 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2616990 ']' 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2616990 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2616990 ']' 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2616990 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2616990 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2616990' 00:09:13.883 killing process with pid 2616990 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2616990 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2616990 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.883 06:19:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.883 06:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.797 00:09:15.797 real 0m13.487s 00:09:15.797 user 0m14.388s 00:09:15.797 sys 0m6.624s 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.797 ************************************ 00:09:15.797 END TEST nvmf_abort 00:09:15.797 ************************************ 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.797 ************************************ 00:09:15.797 START TEST nvmf_ns_hotplug_stress 00:09:15.797 ************************************ 00:09:15.797 06:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:16.059 * Looking for test storage... 
00:09:16.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.059 --rc genhtml_branch_coverage=1 00:09:16.059 --rc genhtml_function_coverage=1 00:09:16.059 --rc genhtml_legend=1 00:09:16.059 --rc geninfo_all_blocks=1 00:09:16.059 --rc geninfo_unexecuted_blocks=1 00:09:16.059 00:09:16.059 ' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.059 --rc genhtml_branch_coverage=1 00:09:16.059 --rc genhtml_function_coverage=1 00:09:16.059 --rc genhtml_legend=1 00:09:16.059 --rc geninfo_all_blocks=1 00:09:16.059 --rc geninfo_unexecuted_blocks=1 00:09:16.059 00:09:16.059 ' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.059 --rc genhtml_branch_coverage=1 00:09:16.059 --rc genhtml_function_coverage=1 00:09:16.059 --rc genhtml_legend=1 00:09:16.059 --rc geninfo_all_blocks=1 00:09:16.059 --rc geninfo_unexecuted_blocks=1 00:09:16.059 00:09:16.059 ' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.059 --rc genhtml_branch_coverage=1 00:09:16.059 --rc genhtml_function_coverage=1 00:09:16.059 --rc genhtml_legend=1 00:09:16.059 --rc geninfo_all_blocks=1 00:09:16.059 --rc geninfo_unexecuted_blocks=1 00:09:16.059 00:09:16.059 ' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.059 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.060 06:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.201 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:24.201 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.202 
06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:24.202 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:24.202 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:24.202 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:09:24.202 00:09:24.202 --- 10.0.0.2 ping statistics --- 00:09:24.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.202 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:09:24.202 00:09:24.202 --- 10.0.0.1 ping statistics --- 00:09:24.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.202 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2622038 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2622038 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
2622038 ']' 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:24.202 06:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.202 [2024-11-20 06:19:43.840978] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:09:24.202 [2024-11-20 06:19:43.841040] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.203 [2024-11-20 06:19:43.940721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.203 [2024-11-20 06:19:43.991830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.203 [2024-11-20 06:19:43.991887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.203 [2024-11-20 06:19:43.991896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.203 [2024-11-20 06:19:43.991903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.203 [2024-11-20 06:19:43.991910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
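
Before the reactors come up it is worth restating, outside the xtrace noise, what nvmftestinit just did: it created a network namespace for the target side of the link, split the two e810 ports across the namespace boundary, opened TCP port 4420 through the firewall, proved reachability with a ping in each direction, loaded the host nvme-tcp driver, and finally launched nvmf_tgt inside the namespace. A minimal sketch of that same sequence, assuming the two ports are already exposed as cvl_0_0 and cvl_0_1 as in this run, and with the SPDK tree path shortened:

  #!/usr/bin/env bash
  set -e
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"                                 # target gets its own netns
  ip link set cvl_0_0 netns "$NS"                    # target-side port moves in
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP (host side)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # admit NVMe/TCP traffic, then prove the link works both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  modprobe nvme-tcp                                  # initiator-side driver

  # nvmfappstart runs the target inside the namespace with the test's core mask
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Keeping only the target port inside the namespace is what lets one machine act as both the NVMe/TCP initiator (10.0.0.1) and the target (10.0.0.2) over real e810 hardware instead of a loopback.
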
00:09:24.203 [2024-11-20 06:19:43.993970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.203 [2024-11-20 06:19:43.994131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.203 [2024-11-20 06:19:43.994132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:24.463 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:24.724 [2024-11-20 06:19:44.879319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.724 06:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.985 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.247 [2024-11-20 06:19:45.278431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.247 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:25.247 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:25.508 Malloc0 00:09:25.508 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:25.769 Delay0 00:09:25.769 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.030 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:26.030 NULL1 00:09:26.030 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:26.292 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:26.292 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2622425 00:09:26.292 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:26.292 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.684 Read completed with error (sct=0, sc=11) 00:09:27.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.684 06:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.684 06:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:27.684 06:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:27.945 true 00:09:27.945 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:27.945 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.888 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.888 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:28.888 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:29.157 true 00:09:29.157 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:29.157 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.157 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.418 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:29.418 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:29.679 true 00:09:29.679 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:29.679 06:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.667 06:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.927 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:30.927 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:31.188 true 00:09:31.188 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:31.188 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.131 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.131 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:32.131 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:32.392 true 00:09:32.392 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:32.392 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.392 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.653 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:32.653 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:32.912 true 00:09:32.912 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:32.912 06:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.912 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.174 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:33.174 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:33.435 true 00:09:33.435 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:33.435 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.435 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.696 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:33.696 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:33.957 true 00:09:33.957 06:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:33.957 06:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.343 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.343 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 
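
By this point the stress test has settled into its steady rhythm, and the script line tags in the trace (target/ns_hotplug_stress.sh @44-@50) let the shape of one iteration be read back out: hot-add the Delay0 namespace to cnode1, resize NULL1 one step larger (1001, 1002, ... as the null_size counter climbs), confirm the perf workload is still alive, then hot-remove namespace 1 and go again, all while spdk_nvme_perf keeps issuing reads. A condensed replay of that loop, as a sketch rather than the script verbatim, with the rpc.py path shortened and PERF_PID assumed to hold the pid recorded at @42:

  rpc=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  size=1000

  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0      # @46: hot-plug the namespace
      size=$((size + 1))                            # @49: null_size=10xx in the trace
      $rpc bdev_null_resize NULL1 "$size"           # @50: resize under live I/O
      kill -0 "$PERF_PID"                           # @44: perf must survive the cycle
      $rpc nvmf_subsystem_remove_ns "$nqn" 1        # @45: hot-unplug namespace 1
  done

kill -0 sends no signal at all; it only asks the kernel whether the pid still exists, so the loop keeps plugging and unplugging for exactly as long as the 30-second randread job started at @40 stays up. The floods of "Read completed with error (sct=0, sc=11)" are the expected side effect: whenever the namespace is detached, in-flight reads complete with a generic status that decodes to Invalid Namespace or Format, and the -Q 1000 option on spdk_nvme_perf keeps all but one report per thousand quiet, which is why the log interleaves them with "Message suppressed 999 times" lines.
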
00:09:35.343 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:35.343 true 00:09:35.343 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:35.343 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.285 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.547 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:36.547 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:36.547 true 00:09:36.547 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:36.547 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.808 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.068 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:37.068 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:37.068 true 00:09:37.068 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:37.329 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.270 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.531 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:38.531 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:38.792 true 00:09:38.792 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:38.792 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.737 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.737 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:39.737 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:39.998 true 00:09:39.998 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:39.998 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.998 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.259 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:40.259 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:40.519 true 00:09:40.519 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:40.519 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.778 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.778 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:40.778 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:41.039 true 00:09:41.039 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:41.039 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.299 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.299 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:41.299 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:41.559 true 00:09:41.559 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:41.559 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.942 06:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.942 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:42.942 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:42.942 true 00:09:42.942 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:42.942 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.885 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.146 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:44.146 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:44.146 true 00:09:44.146 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:44.146 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.406 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.667 06:20:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:44.667 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:44.667 true 00:09:44.667 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:44.667 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.049 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.049 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:46.049 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:46.309 true 00:09:46.309 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:46.309 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.250 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.250 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:47.250 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:47.511 true 00:09:47.511 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425 00:09:47.511 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.511 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.771 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:47.771 06:20:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:09:48.032 true
00:09:48.032 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:48.032 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:48.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:48.032 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:48.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:48.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:48.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:48.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:48.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:48.293 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:09:48.293 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:09:48.554 true
00:09:48.554 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:48.554 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:49.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:49.499 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:49.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:49.499 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:09:49.499 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:09:49.499 true
00:09:49.760 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:49.760 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:49.760 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:50.020 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:09:50.020 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:09:50.020 true
00:09:50.280 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:50.280 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:50.280 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:50.539 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:09:50.539 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:09:50.799 true
00:09:50.799 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:50.799 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:50.799 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:51.059 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:09:51.059 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:09:51.320 true
00:09:51.320 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:51.320 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:51.320 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:51.580 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:51.580 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:51.841 true
00:09:51.841 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:51.841 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:51.841 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:52.102 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:52.102 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:52.363 true
00:09:52.363 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:52.363 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:52.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:52.624 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:52.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:52.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:52.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:52.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:52.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:52.624 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:09:52.624 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:09:52.885 true
00:09:52.885 06:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:52.885 06:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:53.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:53.827 06:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:53.827 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:09:53.827 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:09:54.088 true
00:09:54.088 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:54.088 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:54.349 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:54.349 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:09:54.349 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:09:54.610 true
00:09:54.610 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:54.610 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:54.871 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:54.871 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:09:54.871 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:09:55.149 true
00:09:55.149 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:55.149 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:56.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:56.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:56.091 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:56.091 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:09:56.091 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:09:56.351 true
00:09:56.351 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:56.351 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:56.614 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:56.614 Initializing NVMe Controllers
00:09:56.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:56.614 Controller IO queue size 128, less than required.
00:09:56.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:56.614 Controller IO queue size 128, less than required.
00:09:56.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:56.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:56.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:56.614 Initialization complete. Launching workers.
00:09:56.614 ========================================================
00:09:56.614                                                                                                  Latency(us)
00:09:56.614 Device Information                                                       : IOPS       MiB/s    Average        min        max
00:09:56.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 : 2350.28    1.15     31586.64    1304.32    1016115.79
00:09:56.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0 : 17365.83   8.48     7370.69     1121.39    407147.83
00:09:56.614 ========================================================
00:09:56.614 Total                                                                    : 19716.10   9.63     10257.38    1121.39    1016115.79
00:09:56.614
00:09:56.614 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:09:56.614 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:09:56.875 true
00:09:56.875 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2622425
00:09:56.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2622425) - No such process
00:09:56.875 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2622425
00:09:56.875 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:57.136 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:57.136 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:57.136 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:57.136 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:57.136 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:57.136 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:57.397 null0
00:09:57.397 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:57.397 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:57.397 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:57.658 null1
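The iterations above all come from the same few traced lines of test/nvmf/target/ns_hotplug_stress.sh (sh@44 through sh@50). A minimal bash sketch of what that loop appears to do, reconstructed from the trace alone (the perf_pid variable name and the exact control flow are guesses, not the verbatim script):

  # Sketch reconstructed from the sh@44-sh@50 trace records; not the verbatim script.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1022                   # grows by one per pass (1022, 1023, ... 1035 in the trace)
  while kill -0 "$perf_pid"; do    # sh@44: loop while the I/O generator (PID 2622425 above) is alive
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove namespace 1 under load
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back, backed by the Delay0 bdev
      ((++null_size))                                                  # sh@49: next target size
      "$rpc" bdev_null_resize NULL1 "$null_size"                       # sh@50: resize NULL1 while I/O is in flight
  done

The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" records are presumably the initiator's reads failing while namespace 1 is detached, which is exactly the path this stress test exercises. The loop ends once kill -0 reports the I/O generator has exited ("No such process" above), after which the workload prints its latency summary and the script tears down namespaces 1 and 2.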
00:09:57.658 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:57.658 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:57.658 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:57.658 null2
00:09:57.919 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:57.919 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:57.919 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:57.919 null3
00:09:57.919 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:57.919 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:57.919 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:09:58.180 null4
00:09:58.180 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:58.180 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:58.180 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:09:58.441 null5
00:09:58.441 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:58.441 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:58.441 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:09:58.441 null6
00:09:58.441 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:58.441 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:58.441 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:09:58.702 null7
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
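The sh@58-sh@60 records above set up the parallel phase: eight null bdevs, null0 through null7, one per worker. A sketch of that traced loop (reconstructed from the trace; the 100 and 4096 arguments are read directly off the bdev_null_create calls and are, per the usual rpc.py signature, the bdev size in MiB and the block size in bytes):

  # Sketch reconstructed from the sh@58-sh@60 trace records; not the verbatim script.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8                                       # sh@58
  pids=()                                          # sh@58: worker PIDs collected later
  for ((i = 0; i < nthreads; i++)); do             # sh@59: matches the (( i = 0 )) / (( i < nthreads )) / (( ++i )) records
      "$rpc" bdev_null_create "null$i" 100 4096    # sh@60: create null0..null7
  done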
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
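Each worker runs add_remove (sh@14 through sh@18): it pins one namespace ID to one null bdev and adds and removes that namespace ten times. A reconstruction from the traced lines (the function body is inferred from the sh@14/@16/@17/@18 records; the real script may differ in detail):

  # Sketch reconstructed from the sh@14-sh@18 trace records; not the verbatim script.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2                # sh@14: e.g. "add_remove 1 null0"
      for ((i = 0; i < 10; i++)); do       # sh@16: ten add/remove rounds per worker
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
      done
  }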
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
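The interleaved sh@62-sh@64 records are eight such workers being launched concurrently; their @16/@17/@18 lines race each other from here on, which is what hammers the namespace attach/detach paths in parallel. A sketch of the spawn-and-reap pattern, continuing the sketches above (the backgrounding "&" is implied by the pids+=($!) bookkeeping and by the eight worker PIDs passed to wait at sh@66 below):

  # Sketch reconstructed from the sh@62-sh@66 trace records; not the verbatim script.
  for ((i = 0; i < nthreads; i++)); do     # sh@62
      add_remove "$((i + 1))" "null$i" &   # sh@63: nsid 1..8 paired with null0..null7, run in the background
      pids+=($!)                           # sh@64: remember each worker's PID
  done
  wait "${pids[@]}"                        # sh@66: e.g. "wait 2629128 2629130 ..." in the trace below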
00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2629128 2629130 2629133 2629136 2629139 2629142 2629144 2629147 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.702 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.964 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:59.225 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.486 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.747 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:59.748 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.748 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.748 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:59.748 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.748 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.748 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:59.748 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.748 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.748 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:00.008 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.334 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.334 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.335 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.619 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.880 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.880 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.880 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.880 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:00.880 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.880 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.880 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.880 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.140 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.402 06:20:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.402 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
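The xtrace above all comes from the short add/remove loop in ns_hotplug_stress.sh (markers @16-@18): a counter bounded at 10 passes, with namespace IDs 1-8 mapped onto the null bdevs null0-null7 and attached and detached in randomized order. The heavy interleaving of @16/@17/@18 entries suggests several of these loops run concurrently. A minimal single-worker sketch, with the nsid selection and loop layout as assumptions (rpc.py stands for the full scripts/rpc.py path):

    i=0
    while (( i < 10 )); do                                  # @16
        nsid=$((RANDOM % 8 + 1))                            # nsid 1..8 (assumed)
        rpc.py nvmf_subsystem_add_ns -n "$nsid" \
            nqn.2016-06.io.spdk:cnode1 "null$((nsid - 1))"  # @17
        rpc.py nvmf_subsystem_remove_ns \
            nqn.2016-06.io.spdk:cnode1 "$nsid"              # @18
        (( ++i ))
    done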
00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.664 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.926 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.926 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.926 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.926 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.926 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.926 06:20:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.926 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.186 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.186 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.186 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.187 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.448 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.449 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.709 rmmod nvme_tcp 00:10:02.709 rmmod nvme_fabrics 00:10:02.709 rmmod nvme_keyring 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2622038 ']' 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2622038 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2622038 ']' 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2622038 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2622038 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2622038' 00:10:02.709 killing process with pid 2622038 
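With the loop done, the trap is cleared and nvmftestfini tears the target down: nvmfcleanup unloads nvme-tcp, nvme-fabrics and nvme-keyring, then killprocess stops the nvmf_tgt reactor (pid 2622038). A rough reconstruction of killprocess from the autotest_common.sh markers in the trace; the real helper may differ in detail:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # @952: refuse an empty pid
        kill -0 "$pid" || return 0                   # @956: already gone
        local name=
        if [[ $(uname) == Linux ]]; then             # @957
            name=$(ps --no-headers -o comm= "$pid")  # @958: here: reactor_1
        fi
        if [[ $name == sudo ]]; then                 # @962: sudo needs SIGKILL
            kill -9 "$pid"
        else
            echo "killing process with pid $pid"     # @970
            kill "$pid"                              # @971
        fi
        wait "$pid" || true                          # @976
    }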
00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2622038 00:10:02.709 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2622038 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.970 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.881 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.881 00:10:04.881 real 0m49.142s 00:10:04.881 user 3m12.486s 00:10:04.881 sys 0m16.282s 00:10:04.881 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.881 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.881 ************************************ 00:10:04.881 END TEST nvmf_ns_hotplug_stress 00:10:04.881 ************************************ 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.143 ************************************ 00:10:05.143 START TEST nvmf_delete_subsystem 00:10:05.143 ************************************ 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:05.143 * Looking for test storage... 
00:10:05.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.143 --rc genhtml_branch_coverage=1 00:10:05.143 --rc genhtml_function_coverage=1 00:10:05.143 --rc genhtml_legend=1 00:10:05.143 --rc geninfo_all_blocks=1 00:10:05.143 --rc geninfo_unexecuted_blocks=1 00:10:05.143 00:10:05.143 ' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.143 --rc genhtml_branch_coverage=1 00:10:05.143 --rc genhtml_function_coverage=1 00:10:05.143 --rc genhtml_legend=1 00:10:05.143 --rc geninfo_all_blocks=1 00:10:05.143 --rc geninfo_unexecuted_blocks=1 00:10:05.143 00:10:05.143 ' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.143 --rc genhtml_branch_coverage=1 00:10:05.143 --rc genhtml_function_coverage=1 00:10:05.143 --rc genhtml_legend=1 00:10:05.143 --rc geninfo_all_blocks=1 00:10:05.143 --rc geninfo_unexecuted_blocks=1 00:10:05.143 00:10:05.143 ' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.143 --rc genhtml_branch_coverage=1 00:10:05.143 --rc genhtml_function_coverage=1 00:10:05.143 --rc genhtml_legend=1 00:10:05.143 --rc geninfo_all_blocks=1 00:10:05.143 --rc geninfo_unexecuted_blocks=1 00:10:05.143 00:10:05.143 ' 00:10:05.143 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.405 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.406 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:13.545 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.545 
06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:13.545 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:13.545 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.545 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:13.546 Found net devices under 0000:4b:00.1: cvl_0_1 
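At this point nvmf/common.sh has walked its table of supported NIC PCI IDs (e810, x722, mlx), matched the two Intel E810 functions at 0000:4b:00.0/.1 (device 0x159b), and resolved each to its kernel interface through sysfs. Condensed, the discovery loop around common.sh@410-@429 amounts to the following sketch (array names taken from the trace, details simplified):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # @411: sysfs lookup
        (( ${#pci_net_devs[@]} > 0 )) || continue         # @422: skip bare ports
        pci_net_devs=("${pci_net_devs[@]##*/}")           # @427: keep ifnames only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"  # @428
        net_devs+=("${pci_net_devs[@]}")                  # @429
    done

The entries that follow split the pair across a network namespace: cvl_0_0 becomes the target interface (10.0.0.2) inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1).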
00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:10:13.546 00:10:13.546 --- 10.0.0.2 ping statistics --- 00:10:13.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.546 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:10:13.546 00:10:13.546 --- 10.0.0.1 ping statistics --- 00:10:13.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.546 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2634410 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2634410 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2634410 ']' 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:13.546 06:20:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:13.546 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 [2024-11-20 06:20:32.852393] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:10:13.546 [2024-11-20 06:20:32.852457] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.546 [2024-11-20 06:20:32.954265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.546 [2024-11-20 06:20:33.005688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.546 [2024-11-20 06:20:33.005746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.546 [2024-11-20 06:20:33.005754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.546 [2024-11-20 06:20:33.005762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.546 [2024-11-20 06:20:33.005768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.546 [2024-11-20 06:20:33.007577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.546 [2024-11-20 06:20:33.007579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 [2024-11-20 06:20:33.731714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.546 06:20:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 [2024-11-20 06:20:33.756051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.546 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 NULL1 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.547 Delay0 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2634456 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:13.547 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:13.809 [2024-11-20 06:20:33.883114] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
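The trace above amounts to a complete recipe: one port of the NIC is moved into a network namespace to act as the target, nvmf_tgt is started inside that namespace, and the subsystem is assembled over JSON-RPC before spdk_nvme_perf is launched against it. A condensed stand-alone sketch of the same steps follows; this is not the harness code itself, and it assumes a built SPDK tree, the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing shown above, and that rpc_cmd in these tests is, in effect, scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    # Interface plumbing (mirrors the nvmf_tcp_init trace above)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target inside the namespace, then configuration over RPC; the method
    # names and arguments are exactly the rpc_cmd calls traced above
    # (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s per I/O (values in usec)
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 namespace is the point of the exercise: with roughly a second of artificial latency per I/O, the spdk_nvme_perf run launched above (-c 0xC, queue depth 128) is guaranteed to still have commands in flight when the subsystem is deleted, which is the race this test provokes.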
00:10:15.723 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:15.723 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.723 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... several hundred near-identical spdk_nvme_perf completion records (00:10:15.984 through 00:10:16.928) elided: repeating "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6" and the six nvme_tcp state errors below, as queued I/O is failed back while the subsystem is deleted ...]
00:10:15.984 [2024-11-20 06:20:36.053725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a44a0 is same with the state(6) to be set
00:10:16.927 [2024-11-20 06:20:37.022618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a59a0 is same with the state(6) to be set
00:10:16.927 [2024-11-20 06:20:37.053372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a4680 is same with the state(6) to be set
00:10:16.928 [2024-11-20 06:20:37.055748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2bcc00d020 is same with the state(6) to be set
00:10:16.928 [2024-11-20 06:20:37.056162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2bcc00d7c0 is same with the state(6) to be set
00:10:16.928 [2024-11-20 06:20:37.056266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2bcc000c40 is same with the state(6) to be set
00:10:16.928 Initializing NVMe Controllers
00:10:16.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:16.928 Controller IO queue size 128, less than required.
00:10:16.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:16.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:16.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:16.928 Initialization complete. Launching workers.
00:10:16.928 ======================================================== 00:10:16.928 Latency(us) 00:10:16.928 Device Information : IOPS MiB/s Average min max 00:10:16.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 152.99 0.07 914337.51 307.82 2000464.06 00:10:16.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 179.81 0.09 1015343.70 485.33 2002904.98 00:10:16.928 ======================================================== 00:10:16.928 Total : 332.79 0.16 968911.00 307.82 2002904.98 00:10:16.928 00:10:16.928 [2024-11-20 06:20:37.056792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a59a0 (9): Bad file descriptor 00:10:16.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:16.928 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.928 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:16.928 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2634456 00:10:16.928 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2634456 00:10:17.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2634456) - No such process 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2634456 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2634456 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2634456 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.500 06:20:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 [2024-11-20 06:20:37.586693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2635309 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2635309 00:10:17.500 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.500 [2024-11-20 06:20:37.685826] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
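The @56-@60 lines here (and the iterations that follow) are the harness's bounded poll for the backgrounded perf process to exit, the same pattern used after the first perf run above. As a stand-alone shell idiom, with the pid taken from the trace and the bound matching delete_subsystem.sh:

    perf_pid=2635309                 # backgrounded spdk_nvme_perf, per the trace above
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only probes liveness; no signal is delivered
        (( delay++ > 20 )) && break             # give up after ~10 s (20 polls x 0.5 s)
        sleep 0.5
    done

The harness does not redirect stderr from kill, which is why a bare "kill: (pid) - No such process" line appears in the log once the process is gone.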
00:10:18.070 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:18.070 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2635309
00:10:18.070 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... five further identical (( delay++ > 20 )) / kill -0 2635309 / sleep 0.5 polls (00:10:18.640 through 00:10:20.611) elided ...]
00:10:20.611 Initializing NVMe Controllers
00:10:20.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:20.611 Controller IO queue size 128, less than required.
00:10:20.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:20.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:20.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:20.611 Initialization complete. Launching workers.
00:10:20.611 ======================================================== 00:10:20.611 Latency(us) 00:10:20.611 Device Information : IOPS MiB/s Average min max 00:10:20.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002533.95 1000281.90 1005497.40 00:10:20.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003379.59 1000250.17 1007959.03 00:10:20.611 ======================================================== 00:10:20.611 Total : 256.00 0.12 1002956.77 1000250.17 1007959.03 00:10:20.611 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2635309 00:10:20.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2635309) - No such process 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2635309 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.872 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.132 rmmod nvme_tcp 00:10:21.132 rmmod nvme_fabrics 00:10:21.132 rmmod nvme_keyring 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2634410 ']' 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2634410 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2634410 ']' 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2634410 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2634410 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2634410' 00:10:21.132 killing process with pid 2634410 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2634410 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2634410 00:10:21.132 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.133 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:23.679 00:10:23.679 real 0m18.247s 00:10:23.679 user 0m30.820s 00:10:23.679 sys 0m6.710s 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.679 ************************************ 00:10:23.679 END TEST nvmf_delete_subsystem 00:10:23.679 ************************************ 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.679 ************************************ 00:10:23.679 START TEST nvmf_host_management 00:10:23.679 ************************************ 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:23.679 * Looking for test storage... 
00:10:23.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.679 --rc genhtml_branch_coverage=1 00:10:23.679 --rc genhtml_function_coverage=1 00:10:23.679 --rc genhtml_legend=1 00:10:23.679 --rc geninfo_all_blocks=1 00:10:23.679 --rc geninfo_unexecuted_blocks=1 00:10:23.679 00:10:23.679 ' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.679 --rc genhtml_branch_coverage=1 00:10:23.679 --rc genhtml_function_coverage=1 00:10:23.679 --rc genhtml_legend=1 00:10:23.679 --rc geninfo_all_blocks=1 00:10:23.679 --rc geninfo_unexecuted_blocks=1 00:10:23.679 00:10:23.679 ' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.679 --rc genhtml_branch_coverage=1 00:10:23.679 --rc genhtml_function_coverage=1 00:10:23.679 --rc genhtml_legend=1 00:10:23.679 --rc geninfo_all_blocks=1 00:10:23.679 --rc geninfo_unexecuted_blocks=1 00:10:23.679 00:10:23.679 ' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:23.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.679 --rc genhtml_branch_coverage=1 00:10:23.679 --rc genhtml_function_coverage=1 00:10:23.679 --rc genhtml_legend=1 00:10:23.679 --rc geninfo_all_blocks=1 00:10:23.679 --rc geninfo_unexecuted_blocks=1 00:10:23.679 00:10:23.679 ' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.679 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated plus the standard system directories; the equally long duplicated PATH values traced by paths/export.sh@3, paths/export.sh@4, paths/export.sh@5 (export PATH), and paths/export.sh@6 (echo) elided ...]
00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:23.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.680 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.819 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:31.820 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:31.820 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:31.820 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.820 06:20:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:31.820 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.820 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:10:31.820 00:10:31.820 --- 10.0.0.2 ping statistics --- 00:10:31.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.820 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:10:31.820 00:10:31.820 --- 10.0.0.1 ping statistics --- 00:10:31.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.820 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2640278 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2640278 00:10:31.820 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:31.820 06:20:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2640278 ']' 00:10:31.821 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.821 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.821 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.821 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.821 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:31.821 [2024-11-20 06:20:51.393523] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:10:31.821 [2024-11-20 06:20:51.393591] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.821 [2024-11-20 06:20:51.493768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.821 [2024-11-20 06:20:51.546661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.821 [2024-11-20 06:20:51.546715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.821 [2024-11-20 06:20:51.546723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.821 [2024-11-20 06:20:51.546730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.821 [2024-11-20 06:20:51.546737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:31.821 [2024-11-20 06:20:51.548770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.821 [2024-11-20 06:20:51.548934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.821 [2024-11-20 06:20:51.549095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:31.821 [2024-11-20 06:20:51.549096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.253 [2024-11-20 06:20:52.271321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.253 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.254 Malloc0 00:10:32.254 [2024-11-20 06:20:52.350347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2640521 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2640521 /var/tmp/bdevperf.sock 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2640521 ']' 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:32.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:32.254 { 00:10:32.254 "params": { 00:10:32.254 "name": "Nvme$subsystem", 00:10:32.254 "trtype": "$TEST_TRANSPORT", 00:10:32.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.254 "adrfam": "ipv4", 00:10:32.254 "trsvcid": "$NVMF_PORT", 00:10:32.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.254 "hdgst": ${hdgst:-false}, 00:10:32.254 "ddgst": ${ddgst:-false} 00:10:32.254 }, 00:10:32.254 "method": "bdev_nvme_attach_controller" 00:10:32.254 } 00:10:32.254 EOF 00:10:32.254 )") 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:32.254 06:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:32.254 "params": { 00:10:32.254 "name": "Nvme0", 00:10:32.254 "trtype": "tcp", 00:10:32.254 "traddr": "10.0.0.2", 00:10:32.254 "adrfam": "ipv4", 00:10:32.254 "trsvcid": "4420", 00:10:32.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:32.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:32.254 "hdgst": false, 00:10:32.254 "ddgst": false 00:10:32.254 }, 00:10:32.254 "method": "bdev_nvme_attach_controller" 00:10:32.254 }' 00:10:32.254 [2024-11-20 06:20:52.460114] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:10:32.254 [2024-11-20 06:20:52.460192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640521 ] 00:10:32.542 [2024-11-20 06:20:52.553649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.542 [2024-11-20 06:20:52.606660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.542 Running I/O for 10 seconds... 00:10:33.115 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.115 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:33.115 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:33.115 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=704 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 704 -ge 100 ']' 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:33.116 06:20:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.116 [2024-11-20 06:20:53.362093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.362264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b150 is same with the state(6) to be set 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.116 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:33.116 [2024-11-20 06:20:53.373404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.116 [2024-11-20 06:20:53.373464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:33.116 [2024-11-20 06:20:53.373477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.116 [2024-11-20 06:20:53.373485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.373494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.116 [2024-11-20 06:20:53.373502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.373510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.116 [2024-11-20 06:20:53.373518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.373526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f000 is same with the state(6) to be set 00:10:33.116 [2024-11-20 06:20:53.374455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.116 [2024-11-20 06:20:53.374773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.116 [2024-11-20 06:20:53.374782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.374988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.374995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.117 [2024-11-20 06:20:53.375458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.117 [2024-11-20 06:20:53.375468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:33.118 [2024-11-20 06:20:53.375475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.375484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:33.118 [2024-11-20 06:20:53.375492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.375501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:33.118 [2024-11-20 06:20:53.375509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.375519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:33.118 [2024-11-20 06:20:53.375526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.375535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:33.118 [2024-11-20 06:20:53.375543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.375552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:33.118 [2024-11-20 06:20:53.375560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.375569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:33.118 [2024-11-20 06:20:53.375577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.375586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:33.118 [2024-11-20 06:20:53.375594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:33.118 [2024-11-20 06:20:53.376869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:10:33.118 task offset: 98304 on job bdev=Nvme0n1 fails
00:10:33.118
00:10:33.118 Latency(us)
00:10:33.118 [2024-11-20T05:20:53.397Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:33.118 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:33.118 Job: Nvme0n1 ended in about 0.56 seconds with error
00:10:33.118 Verification LBA range: start 0x0 length 0x400
00:10:33.118 Nvme0n1                     :       0.56    1377.81      86.11     114.82       0.00   41844.04    1652.05   36700.16
00:10:33.118 [2024-11-20T05:20:53.397Z] ===================================================================================================================
00:10:33.118 [2024-11-20T05:20:53.397Z] Total                       :            1377.81      86.11     114.82       0.00   41844.04    1652.05   36700.16
00:10:33.118 [2024-11-20 06:20:53.379092]
app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:33.118 [2024-11-20 06:20:53.379129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f000 (9): Bad file descriptor
00:10:33.118 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.118 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-11-20 06:20:53.426503] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2640521
00:10:34.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2640521) - No such process
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:34.320 {
00:10:34.320 "params": {
00:10:34.320 "name": "Nvme$subsystem",
00:10:34.320 "trtype": "$TEST_TRANSPORT",
00:10:34.320 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:34.320 "adrfam": "ipv4",
00:10:34.320 "trsvcid": "$NVMF_PORT",
00:10:34.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:34.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:34.320 "hdgst": ${hdgst:-false},
00:10:34.320 "ddgst": ${ddgst:-false}
00:10:34.320 },
00:10:34.320 "method": "bdev_nvme_attach_controller"
00:10:34.320 }
00:10:34.320 EOF
00:10:34.320 )")
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:34.320 06:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:34.320 "params": { 00:10:34.320 "name": "Nvme0", 00:10:34.320 "trtype": "tcp", 00:10:34.320 "traddr": "10.0.0.2", 00:10:34.320 "adrfam": "ipv4", 00:10:34.320 "trsvcid": "4420", 00:10:34.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:34.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:34.320 "hdgst": false, 00:10:34.320 "ddgst": false 00:10:34.320 }, 00:10:34.320 "method": "bdev_nvme_attach_controller" 00:10:34.320 }' 00:10:34.320 [2024-11-20 06:20:54.438008] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:10:34.320 [2024-11-20 06:20:54.438061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640883 ] 00:10:34.320 [2024-11-20 06:20:54.525413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.320 [2024-11-20 06:20:54.560877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.888 Running I/O for 1 seconds... 00:10:35.828 1661.00 IOPS, 103.81 MiB/s 00:10:35.828 Latency(us) 00:10:35.828 [2024-11-20T05:20:56.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.828 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:35.828 Verification LBA range: start 0x0 length 0x400 00:10:35.828 Nvme0n1 : 1.04 1659.43 103.71 0.00 0.00 37907.13 6307.84 32549.55 00:10:35.828 [2024-11-20T05:20:56.107Z] =================================================================================================================== 00:10:35.828 [2024-11-20T05:20:56.107Z] Total : 1659.43 103.71 0.00 0.00 37907.13 6307.84 32549.55 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.828 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.828 rmmod nvme_tcp 00:10:35.828 rmmod nvme_fabrics 00:10:35.828 rmmod nvme_keyring 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2640278 ']' 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2640278 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2640278 ']' 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2640278 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2640278 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2640278' 00:10:36.089 killing process with pid 2640278 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2640278 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2640278 00:10:36.089 [2024-11-20 06:20:56.276203] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.089 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.635 06:20:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:38.635 00:10:38.635 real 0m14.835s 00:10:38.635 user 0m23.823s 00:10:38.635 sys 0m6.873s 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:38.635 ************************************ 00:10:38.635 END TEST nvmf_host_management 00:10:38.635 ************************************ 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.635 ************************************ 00:10:38.635 START TEST nvmf_lvol 00:10:38.635 ************************************ 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:38.635 * Looking for test storage... 00:10:38.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.635 --rc genhtml_branch_coverage=1 00:10:38.635 --rc genhtml_function_coverage=1 00:10:38.635 --rc genhtml_legend=1 00:10:38.635 --rc geninfo_all_blocks=1 00:10:38.635 --rc geninfo_unexecuted_blocks=1 00:10:38.635 00:10:38.635 ' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.635 --rc genhtml_branch_coverage=1 00:10:38.635 --rc genhtml_function_coverage=1 00:10:38.635 --rc genhtml_legend=1 00:10:38.635 --rc geninfo_all_blocks=1 00:10:38.635 --rc geninfo_unexecuted_blocks=1 00:10:38.635 00:10:38.635 ' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.635 --rc genhtml_branch_coverage=1 00:10:38.635 --rc genhtml_function_coverage=1 00:10:38.635 --rc genhtml_legend=1 00:10:38.635 --rc geninfo_all_blocks=1 00:10:38.635 --rc geninfo_unexecuted_blocks=1 00:10:38.635 00:10:38.635 ' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:38.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.635 --rc genhtml_branch_coverage=1 00:10:38.635 --rc genhtml_function_coverage=1 00:10:38.635 --rc genhtml_legend=1 00:10:38.635 --rc geninfo_all_blocks=1 00:10:38.635 --rc geninfo_unexecuted_blocks=1 00:10:38.635 00:10:38.635 ' 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
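The cmp_versions walk traced above is the framework checking whether the installed lcov predates 2.x before choosing coverage flags: both version strings are split on '.', '-' and ':' into arrays and compared field by field, with missing fields treated as 0. A condensed, runnable sketch of that comparison (a simplification of the scripts/common.sh helper, not a verbatim copy):

lt() {  # returns 0 when version $1 sorts strictly before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
                (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly newer
                (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly older
        done
        return 1  # equal is not "less than"
}
lt 1.15 2 && echo 'lcov predates 2.x'  # the branch this run takes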
00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.635 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.636 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:46.782 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:46.782 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.782 06:21:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:46.782 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:46.782 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.782 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.782 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.782 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.782 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.782 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:10:46.783 00:10:46.783 --- 10.0.0.2 ping statistics --- 00:10:46.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.783 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:10:46.783 00:10:46.783 --- 10.0.0.1 ping statistics --- 00:10:46.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.783 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2645673 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2645673 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2645673 ']' 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:46.783 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:46.783 [2024-11-20 06:21:06.300484] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:10:46.783 [2024-11-20 06:21:06.300550] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.783 [2024-11-20 06:21:06.398349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:46.783 [2024-11-20 06:21:06.451361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.783 [2024-11-20 06:21:06.451413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.783 [2024-11-20 06:21:06.451422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.783 [2024-11-20 06:21:06.451429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.783 [2024-11-20 06:21:06.451435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.783 [2024-11-20 06:21:06.453210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.783 [2024-11-20 06:21:06.453303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.783 [2024-11-20 06:21:06.453304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.045 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:47.045 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:10:47.045 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.045 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:47.045 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.045 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.045 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:47.306 [2024-11-20 06:21:07.334842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.306 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.567 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:47.567 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.567 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:47.567 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:47.828 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:48.092 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7765a0ba-dec3-43e9-bcdc-179c077c3f40 00:10:48.092 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7765a0ba-dec3-43e9-bcdc-179c077c3f40 lvol 20 00:10:48.355 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=24eaf60f-9e8a-45e8-87e8-c251df7a183c 00:10:48.355 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:48.355 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24eaf60f-9e8a-45e8-87e8-c251df7a183c 00:10:48.616 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:48.876 [2024-11-20 06:21:08.904740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.876 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:48.876 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2646367 00:10:48.876 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:48.876 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:50.259 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 24eaf60f-9e8a-45e8-87e8-c251df7a183c MY_SNAPSHOT 00:10:50.260 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a77366cf-6dcd-416b-873f-f030f8ab843f 00:10:50.260 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 24eaf60f-9e8a-45e8-87e8-c251df7a183c 30 00:10:50.520 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a77366cf-6dcd-416b-873f-f030f8ab843f MY_CLONE 00:10:50.520 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=342a0cdf-afce-4dea-890b-80caa3e97f3f 00:10:50.520 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 342a0cdf-afce-4dea-890b-80caa3e97f3f 00:10:51.090 06:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2646367 00:11:01.085 Initializing NVMe Controllers 00:11:01.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:01.085 Controller IO queue size 128, less than required. 00:11:01.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
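Before the perf job whose startup banner appears here, the trace above has built the whole lvol stack: two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported over NVMe/TCP, then, while I/O runs, a snapshot, a resize to 30 MiB, a clone of the snapshot, and an inflate of the clone. Condensed to just the RPC calls traced above, with the rpc.py path shortened and shell variables standing in for the UUIDs each call returns at runtime:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # taken mid-I/O
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"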
00:11:01.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:01.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:01.085 Initialization complete. Launching workers. 00:11:01.085 ======================================================== 00:11:01.085 Latency(us) 00:11:01.085 Device Information : IOPS MiB/s Average min max 00:11:01.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16093.90 62.87 7955.17 1638.55 64139.03 00:11:01.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17407.50 68.00 7353.53 1112.06 55605.62 00:11:01.085 ======================================================== 00:11:01.085 Total : 33501.40 130.86 7642.55 1112.06 64139.03 00:11:01.085 00:11:01.085 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:01.085 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 24eaf60f-9e8a-45e8-87e8-c251df7a183c 00:11:01.085 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7765a0ba-dec3-43e9-bcdc-179c077c3f40 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.085 rmmod nvme_tcp 00:11:01.085 rmmod nvme_fabrics 00:11:01.085 rmmod nvme_keyring 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2645673 ']' 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2645673 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2645673 ']' 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2645673 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2645673 00:11:01.085 06:21:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2645673' 00:11:01.085 killing process with pid 2645673 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2645673 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2645673 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.085 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.469 00:11:02.469 real 0m23.968s 00:11:02.469 user 1m4.746s 00:11:02.469 sys 0m8.705s 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:02.469 ************************************ 00:11:02.469 END TEST nvmf_lvol 00:11:02.469 ************************************ 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.469 ************************************ 00:11:02.469 START TEST nvmf_lvs_grow 00:11:02.469 ************************************ 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:02.469 * Looking for test storage... 
00:11:02.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:02.469 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.470 --rc genhtml_branch_coverage=1 00:11:02.470 --rc genhtml_function_coverage=1 00:11:02.470 --rc genhtml_legend=1 00:11:02.470 --rc geninfo_all_blocks=1 00:11:02.470 --rc geninfo_unexecuted_blocks=1 00:11:02.470 00:11:02.470 ' 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.470 --rc genhtml_branch_coverage=1 00:11:02.470 --rc genhtml_function_coverage=1 00:11:02.470 --rc genhtml_legend=1 00:11:02.470 --rc geninfo_all_blocks=1 00:11:02.470 --rc geninfo_unexecuted_blocks=1 00:11:02.470 00:11:02.470 ' 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.470 --rc genhtml_branch_coverage=1 00:11:02.470 --rc genhtml_function_coverage=1 00:11:02.470 --rc genhtml_legend=1 00:11:02.470 --rc geninfo_all_blocks=1 00:11:02.470 --rc geninfo_unexecuted_blocks=1 00:11:02.470 00:11:02.470 ' 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:02.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.470 --rc genhtml_branch_coverage=1 00:11:02.470 --rc genhtml_function_coverage=1 00:11:02.470 --rc genhtml_legend=1 00:11:02.470 --rc geninfo_all_blocks=1 00:11:02.470 --rc geninfo_unexecuted_blocks=1 00:11:02.470 00:11:02.470 ' 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:02.470 06:21:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.470 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.731 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:10.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:10.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.908 06:21:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:10.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:10.908 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.908 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.908 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:11:10.908 00:11:10.908 --- 10.0.0.2 ping statistics --- 00:11:10.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.909 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:11:10.909 00:11:10.909 --- 10.0.0.1 ping statistics --- 00:11:10.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.909 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2653204 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2653204 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2653204 ']' 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:10.909 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:10.909 [2024-11-20 06:21:30.280624] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
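The nvmftestinit trace above is the entire network fixture for this run: the two ice-driven e810 ports (cvl_0_0, cvl_0_1) are split across a network namespace so a single host can act as both NVMe/TCP target and initiator. A minimal sketch of that wiring, using the interface names and 10.0.0.0/24 addresses printed in the log (run as root; the harness's ipts helper also tags the iptables rule with a comment, omitted here):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                      # target side gets its own namespace
  ip link set cvl_0_0 netns "$NS"         # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator address, host side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port toward the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # host -> namespace (0.626 ms above)
  ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace -> host (0.313 ms above)

With both pings answering, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which is the pid 2653204 startup the surrounding lines record.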
00:11:10.909 [2024-11-20 06:21:30.280686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.909 [2024-11-20 06:21:30.379834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.909 [2024-11-20 06:21:30.431317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.909 [2024-11-20 06:21:30.431371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.909 [2024-11-20 06:21:30.431380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.909 [2024-11-20 06:21:30.431388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.909 [2024-11-20 06:21:30.431394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.909 [2024-11-20 06:21:30.432186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.909 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.909 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:11:10.909 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.909 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:10.909 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:10.909 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.909 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:11.170 [2024-11-20 06:21:31.319958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.171 ************************************ 00:11:11.171 START TEST lvs_grow_clean 00:11:11.171 ************************************ 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:11.171 06:21:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.171 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:11.432 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:11.432 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:11.693 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:11.693 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:11.693 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:11.954 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:11.954 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:11.954 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 lvol 150 00:11:11.954 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e8260d89-d403-4ea6-aa40-9f416955ac83 00:11:11.954 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.954 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:12.214 [2024-11-20 06:21:32.355957] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:12.214 [2024-11-20 06:21:32.356033] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:12.214 true 00:11:12.214 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:12.214 06:21:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:12.475 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:12.475 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:12.736 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8260d89-d403-4ea6-aa40-9f416955ac83 00:11:12.736 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:12.997 [2024-11-20 06:21:33.134460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.997 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2653915 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2653915 /var/tmp/bdevperf.sock 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2653915 ']' 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.259 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:13.259 [2024-11-20 06:21:33.386089] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
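Stripped of the xtrace noise, the lvs_grow_clean setup above reduces to a short rpc.py sequence: back an AIO bdev with a 200M file, build an lvstore with 4 MiB clusters on it, carve out a 150M lvol, then grow the file to 400M and rescan so the bdev sees the new size. A condensed sketch, with $rpc standing in for the full scripts/rpc.py path and $img for the aio_bdev test file (both placeholders for the long Jenkins workspace paths in the log):

  truncate -s 200M "$img"
  $rpc bdev_aio_create "$img" aio_bdev 4096              # 4 KiB block size
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)       # 150 MiB volume
  truncate -s 400M "$img"                                # grow the backing file
  $rpc bdev_aio_rescan aio_bdev     # bdev grows 51200 -> 102400 blocks (see above)

  # export the lvol over NVMe/TCP so bdevperf can attach from the host side
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that the rescan only resizes the base bdev; the lvstore still reports total_data_clusters == 49 afterwards, and keeps doing so until bdev_lvol_grow_lvstore is invoked later, while bdevperf is driving random writes.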
00:11:13.259 [2024-11-20 06:21:33.386170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653915 ] 00:11:13.259 [2024-11-20 06:21:33.476333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.259 [2024-11-20 06:21:33.528303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.202 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:14.202 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:11:14.202 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:14.202 Nvme0n1 00:11:14.202 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:14.464 [ 00:11:14.464 { 00:11:14.464 "name": "Nvme0n1", 00:11:14.464 "aliases": [ 00:11:14.464 "e8260d89-d403-4ea6-aa40-9f416955ac83" 00:11:14.464 ], 00:11:14.464 "product_name": "NVMe disk", 00:11:14.464 "block_size": 4096, 00:11:14.464 "num_blocks": 38912, 00:11:14.464 "uuid": "e8260d89-d403-4ea6-aa40-9f416955ac83", 00:11:14.464 "numa_id": 0, 00:11:14.464 "assigned_rate_limits": { 00:11:14.464 "rw_ios_per_sec": 0, 00:11:14.464 "rw_mbytes_per_sec": 0, 00:11:14.464 "r_mbytes_per_sec": 0, 00:11:14.464 "w_mbytes_per_sec": 0 00:11:14.464 }, 00:11:14.464 "claimed": false, 00:11:14.464 "zoned": false, 00:11:14.464 "supported_io_types": { 00:11:14.464 "read": true, 00:11:14.464 "write": true, 00:11:14.464 "unmap": true, 00:11:14.464 "flush": true, 00:11:14.464 "reset": true, 00:11:14.464 "nvme_admin": true, 00:11:14.464 "nvme_io": true, 00:11:14.464 "nvme_io_md": false, 00:11:14.464 "write_zeroes": true, 00:11:14.464 "zcopy": false, 00:11:14.464 "get_zone_info": false, 00:11:14.464 "zone_management": false, 00:11:14.464 "zone_append": false, 00:11:14.464 "compare": true, 00:11:14.464 "compare_and_write": true, 00:11:14.464 "abort": true, 00:11:14.464 "seek_hole": false, 00:11:14.464 "seek_data": false, 00:11:14.464 "copy": true, 00:11:14.464 "nvme_iov_md": false 00:11:14.464 }, 00:11:14.464 "memory_domains": [ 00:11:14.464 { 00:11:14.464 "dma_device_id": "system", 00:11:14.464 "dma_device_type": 1 00:11:14.464 } 00:11:14.464 ], 00:11:14.464 "driver_specific": { 00:11:14.464 "nvme": [ 00:11:14.464 { 00:11:14.464 "trid": { 00:11:14.464 "trtype": "TCP", 00:11:14.464 "adrfam": "IPv4", 00:11:14.464 "traddr": "10.0.0.2", 00:11:14.464 "trsvcid": "4420", 00:11:14.464 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:14.464 }, 00:11:14.464 "ctrlr_data": { 00:11:14.464 "cntlid": 1, 00:11:14.464 "vendor_id": "0x8086", 00:11:14.464 "model_number": "SPDK bdev Controller", 00:11:14.464 "serial_number": "SPDK0", 00:11:14.464 "firmware_revision": "25.01", 00:11:14.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:14.464 "oacs": { 00:11:14.464 "security": 0, 00:11:14.464 "format": 0, 00:11:14.464 "firmware": 0, 00:11:14.464 "ns_manage": 0 00:11:14.464 }, 00:11:14.464 "multi_ctrlr": true, 00:11:14.464 
"ana_reporting": false 00:11:14.464 }, 00:11:14.464 "vs": { 00:11:14.464 "nvme_version": "1.3" 00:11:14.464 }, 00:11:14.464 "ns_data": { 00:11:14.464 "id": 1, 00:11:14.464 "can_share": true 00:11:14.464 } 00:11:14.464 } 00:11:14.464 ], 00:11:14.464 "mp_policy": "active_passive" 00:11:14.464 } 00:11:14.464 } 00:11:14.464 ] 00:11:14.464 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:14.464 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2654111 00:11:14.464 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:14.464 Running I/O for 10 seconds... 00:11:15.853 Latency(us) 00:11:15.853 [2024-11-20T05:21:36.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:15.853 Nvme0n1 : 1.00 25100.00 98.05 0.00 0.00 0.00 0.00 0.00 00:11:15.853 [2024-11-20T05:21:36.132Z] =================================================================================================================== 00:11:15.853 [2024-11-20T05:21:36.132Z] Total : 25100.00 98.05 0.00 0.00 0.00 0.00 0.00 00:11:15.853 00:11:16.424 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:16.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.685 Nvme0n1 : 2.00 25206.50 98.46 0.00 0.00 0.00 0.00 0.00 00:11:16.685 [2024-11-20T05:21:36.964Z] =================================================================================================================== 00:11:16.685 [2024-11-20T05:21:36.964Z] Total : 25206.50 98.46 0.00 0.00 0.00 0.00 0.00 00:11:16.685 00:11:16.685 true 00:11:16.685 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:16.685 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:16.945 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:16.945 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:16.945 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2654111 00:11:17.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.515 Nvme0n1 : 3.00 25262.67 98.68 0.00 0.00 0.00 0.00 0.00 00:11:17.515 [2024-11-20T05:21:37.794Z] =================================================================================================================== 00:11:17.515 [2024-11-20T05:21:37.794Z] Total : 25262.67 98.68 0.00 0.00 0.00 0.00 0.00 00:11:17.515 00:11:18.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.454 Nvme0n1 : 4.00 25299.00 98.82 0.00 0.00 0.00 0.00 0.00 00:11:18.454 [2024-11-20T05:21:38.733Z] 
=================================================================================================================== 00:11:18.454 [2024-11-20T05:21:38.733Z] Total : 25299.00 98.82 0.00 0.00 0.00 0.00 0.00 00:11:18.454 00:11:19.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.836 Nvme0n1 : 5.00 25333.40 98.96 0.00 0.00 0.00 0.00 0.00 00:11:19.836 [2024-11-20T05:21:40.115Z] =================================================================================================================== 00:11:19.836 [2024-11-20T05:21:40.115Z] Total : 25333.40 98.96 0.00 0.00 0.00 0.00 0.00 00:11:19.836 00:11:20.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.777 Nvme0n1 : 6.00 25356.33 99.05 0.00 0.00 0.00 0.00 0.00 00:11:20.777 [2024-11-20T05:21:41.056Z] =================================================================================================================== 00:11:20.777 [2024-11-20T05:21:41.056Z] Total : 25356.33 99.05 0.00 0.00 0.00 0.00 0.00 00:11:20.777 00:11:21.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.718 Nvme0n1 : 7.00 25372.86 99.11 0.00 0.00 0.00 0.00 0.00 00:11:21.718 [2024-11-20T05:21:41.997Z] =================================================================================================================== 00:11:21.718 [2024-11-20T05:21:41.997Z] Total : 25372.86 99.11 0.00 0.00 0.00 0.00 0.00 00:11:21.718 00:11:22.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.660 Nvme0n1 : 8.00 25393.00 99.19 0.00 0.00 0.00 0.00 0.00 00:11:22.660 [2024-11-20T05:21:42.939Z] =================================================================================================================== 00:11:22.660 [2024-11-20T05:21:42.939Z] Total : 25393.00 99.19 0.00 0.00 0.00 0.00 0.00 00:11:22.660 00:11:23.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.602 Nvme0n1 : 9.00 25401.89 99.23 0.00 0.00 0.00 0.00 0.00 00:11:23.602 [2024-11-20T05:21:43.881Z] =================================================================================================================== 00:11:23.602 [2024-11-20T05:21:43.881Z] Total : 25401.89 99.23 0.00 0.00 0.00 0.00 0.00 00:11:23.602 00:11:24.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.546 Nvme0n1 : 10.00 25415.10 99.28 0.00 0.00 0.00 0.00 0.00 00:11:24.546 [2024-11-20T05:21:44.825Z] =================================================================================================================== 00:11:24.546 [2024-11-20T05:21:44.825Z] Total : 25415.10 99.28 0.00 0.00 0.00 0.00 0.00 00:11:24.546 00:11:24.546 00:11:24.546 Latency(us) 00:11:24.546 [2024-11-20T05:21:44.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.546 Nvme0n1 : 10.00 25419.76 99.30 0.00 0.00 5032.24 2034.35 8738.13 00:11:24.546 [2024-11-20T05:21:44.825Z] =================================================================================================================== 00:11:24.546 [2024-11-20T05:21:44.825Z] Total : 25419.76 99.30 0.00 0.00 5032.24 2034.35 8738.13 00:11:24.546 { 00:11:24.546 "results": [ 00:11:24.546 { 00:11:24.546 "job": "Nvme0n1", 00:11:24.546 "core_mask": "0x2", 00:11:24.546 "workload": "randwrite", 00:11:24.546 "status": "finished", 00:11:24.546 "queue_depth": 128, 00:11:24.546 "io_size": 4096, 00:11:24.546 
"runtime": 10.003202, 00:11:24.546 "iops": 25419.76059265823, 00:11:24.546 "mibps": 99.29593981507121, 00:11:24.546 "io_failed": 0, 00:11:24.546 "io_timeout": 0, 00:11:24.546 "avg_latency_us": 5032.239885584993, 00:11:24.546 "min_latency_us": 2034.3466666666666, 00:11:24.546 "max_latency_us": 8738.133333333333 00:11:24.546 } 00:11:24.546 ], 00:11:24.546 "core_count": 1 00:11:24.546 } 00:11:24.546 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2653915 00:11:24.546 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2653915 ']' 00:11:24.546 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2653915 00:11:24.546 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:11:24.546 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:24.546 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2653915 00:11:24.807 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:24.807 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:24.807 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2653915' 00:11:24.807 killing process with pid 2653915 00:11:24.807 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2653915 00:11:24.807 Received shutdown signal, test time was about 10.000000 seconds 00:11:24.807 00:11:24.807 Latency(us) 00:11:24.807 [2024-11-20T05:21:45.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.807 [2024-11-20T05:21:45.086Z] =================================================================================================================== 00:11:24.807 [2024-11-20T05:21:45.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:24.807 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2653915 00:11:24.807 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.068 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:25.068 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:25.068 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:25.328 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:25.328 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:25.328 06:21:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:25.328 [2024-11-20 06:21:45.589670] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:25.590 request: 00:11:25.590 { 00:11:25.590 "uuid": "cf832606-5afa-4150-93f8-4b5fc0b1dd96", 00:11:25.590 "method": "bdev_lvol_get_lvstores", 00:11:25.590 "req_id": 1 00:11:25.590 } 00:11:25.590 Got JSON-RPC error response 00:11:25.590 response: 00:11:25.590 { 00:11:25.590 "code": -19, 00:11:25.590 "message": "No such device" 00:11:25.590 } 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.590 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:25.851 aio_bdev 00:11:25.851 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e8260d89-d403-4ea6-aa40-9f416955ac83 00:11:25.851 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=e8260d89-d403-4ea6-aa40-9f416955ac83 00:11:25.851 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.851 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:11:25.851 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.851 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:25.851 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:25.851 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8260d89-d403-4ea6-aa40-9f416955ac83 -t 2000 00:11:26.112 [ 00:11:26.112 { 00:11:26.112 "name": "e8260d89-d403-4ea6-aa40-9f416955ac83", 00:11:26.112 "aliases": [ 00:11:26.112 "lvs/lvol" 00:11:26.112 ], 00:11:26.112 "product_name": "Logical Volume", 00:11:26.112 "block_size": 4096, 00:11:26.112 "num_blocks": 38912, 00:11:26.112 "uuid": "e8260d89-d403-4ea6-aa40-9f416955ac83", 00:11:26.112 "assigned_rate_limits": { 00:11:26.112 "rw_ios_per_sec": 0, 00:11:26.112 "rw_mbytes_per_sec": 0, 00:11:26.112 "r_mbytes_per_sec": 0, 00:11:26.112 "w_mbytes_per_sec": 0 00:11:26.112 }, 00:11:26.112 "claimed": false, 00:11:26.112 "zoned": false, 00:11:26.112 "supported_io_types": { 00:11:26.112 "read": true, 00:11:26.112 "write": true, 00:11:26.112 "unmap": true, 00:11:26.112 "flush": false, 00:11:26.112 "reset": true, 00:11:26.112 "nvme_admin": false, 00:11:26.112 "nvme_io": false, 00:11:26.112 "nvme_io_md": false, 00:11:26.112 "write_zeroes": true, 00:11:26.112 "zcopy": false, 00:11:26.112 "get_zone_info": false, 00:11:26.112 "zone_management": false, 00:11:26.112 "zone_append": false, 00:11:26.112 "compare": false, 00:11:26.112 "compare_and_write": false, 00:11:26.112 "abort": false, 00:11:26.112 "seek_hole": true, 00:11:26.112 "seek_data": true, 00:11:26.112 "copy": false, 00:11:26.112 "nvme_iov_md": false 00:11:26.112 }, 00:11:26.112 "driver_specific": { 00:11:26.112 "lvol": { 00:11:26.112 "lvol_store_uuid": "cf832606-5afa-4150-93f8-4b5fc0b1dd96", 00:11:26.112 "base_bdev": "aio_bdev", 00:11:26.112 "thin_provision": false, 00:11:26.112 "num_allocated_clusters": 38, 00:11:26.112 "snapshot": false, 00:11:26.112 "clone": false, 00:11:26.112 "esnap_clone": false 00:11:26.112 } 00:11:26.112 } 00:11:26.112 } 00:11:26.112 ] 00:11:26.112 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:11:26.112 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:26.112 
06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:26.373 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:26.373 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:26.373 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:26.373 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:26.373 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8260d89-d403-4ea6-aa40-9f416955ac83 00:11:26.634 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf832606-5afa-4150-93f8-4b5fc0b1dd96 00:11:26.895 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:26.895 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.895 00:11:26.895 real 0m15.745s 00:11:26.895 user 0m15.480s 00:11:26.895 sys 0m1.384s 00:11:26.895 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.895 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:26.895 ************************************ 00:11:26.895 END TEST lvs_grow_clean 00:11:26.895 ************************************ 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:27.156 ************************************ 00:11:27.156 START TEST lvs_grow_dirty 00:11:27.156 ************************************ 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:27.156 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:27.416 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:27.416 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:27.416 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=971a1042-02d3-465d-9183-9f61f52056d5 00:11:27.416 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:27.416 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:27.677 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:27.677 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:27.677 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 971a1042-02d3-465d-9183-9f61f52056d5 lvol 150 00:11:27.677 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=146f3abb-be72-4b47-9e5f-30cc1439193a 00:11:27.677 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:27.677 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:27.939 [2024-11-20 06:21:48.104371] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:27.939 [2024-11-20 06:21:48.104416] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:27.939 true 00:11:27.939 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:27.939 06:21:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:28.200 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:28.200 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:28.200 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 146f3abb-be72-4b47-9e5f-30cc1439193a 00:11:28.461 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:28.461 [2024-11-20 06:21:48.726294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2657008 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2657008 /var/tmp/bdevperf.sock 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2657008 ']' 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:28.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:28.722 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.722 [2024-11-20 06:21:48.955937] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
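Behind these traces, exporting the lvol over NVMe/TCP and wiring bdevperf to it takes only a handful of RPCs. A hedged sketch reusing the NQN, address, and lvol UUID from the log; note that bdevperf is a separate application with its own RPC socket, so the controller is attached through /var/tmp/bdevperf.sock rather than the target's socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode0
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK0           # -a: allow any host
    "$rpc" nvmf_subsystem_add_ns "$nqn" 146f3abb-be72-4b47-9e5f-30cc1439193a
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"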
00:11:28.722 [2024-11-20 06:21:48.955987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657008 ] 00:11:28.984 [2024-11-20 06:21:49.036598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.984 [2024-11-20 06:21:49.066279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.555 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:29.555 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:29.555 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:29.816 Nvme0n1 00:11:29.816 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:30.076 [ 00:11:30.076 { 00:11:30.076 "name": "Nvme0n1", 00:11:30.076 "aliases": [ 00:11:30.076 "146f3abb-be72-4b47-9e5f-30cc1439193a" 00:11:30.076 ], 00:11:30.076 "product_name": "NVMe disk", 00:11:30.076 "block_size": 4096, 00:11:30.076 "num_blocks": 38912, 00:11:30.076 "uuid": "146f3abb-be72-4b47-9e5f-30cc1439193a", 00:11:30.076 "numa_id": 0, 00:11:30.076 "assigned_rate_limits": { 00:11:30.076 "rw_ios_per_sec": 0, 00:11:30.076 "rw_mbytes_per_sec": 0, 00:11:30.076 "r_mbytes_per_sec": 0, 00:11:30.076 "w_mbytes_per_sec": 0 00:11:30.076 }, 00:11:30.076 "claimed": false, 00:11:30.076 "zoned": false, 00:11:30.076 "supported_io_types": { 00:11:30.076 "read": true, 00:11:30.076 "write": true, 00:11:30.076 "unmap": true, 00:11:30.076 "flush": true, 00:11:30.076 "reset": true, 00:11:30.076 "nvme_admin": true, 00:11:30.076 "nvme_io": true, 00:11:30.076 "nvme_io_md": false, 00:11:30.076 "write_zeroes": true, 00:11:30.076 "zcopy": false, 00:11:30.076 "get_zone_info": false, 00:11:30.076 "zone_management": false, 00:11:30.076 "zone_append": false, 00:11:30.076 "compare": true, 00:11:30.076 "compare_and_write": true, 00:11:30.076 "abort": true, 00:11:30.076 "seek_hole": false, 00:11:30.076 "seek_data": false, 00:11:30.076 "copy": true, 00:11:30.076 "nvme_iov_md": false 00:11:30.076 }, 00:11:30.076 "memory_domains": [ 00:11:30.076 { 00:11:30.076 "dma_device_id": "system", 00:11:30.076 "dma_device_type": 1 00:11:30.076 } 00:11:30.076 ], 00:11:30.076 "driver_specific": { 00:11:30.076 "nvme": [ 00:11:30.076 { 00:11:30.076 "trid": { 00:11:30.076 "trtype": "TCP", 00:11:30.076 "adrfam": "IPv4", 00:11:30.076 "traddr": "10.0.0.2", 00:11:30.076 "trsvcid": "4420", 00:11:30.076 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:30.076 }, 00:11:30.076 "ctrlr_data": { 00:11:30.076 "cntlid": 1, 00:11:30.076 "vendor_id": "0x8086", 00:11:30.076 "model_number": "SPDK bdev Controller", 00:11:30.076 "serial_number": "SPDK0", 00:11:30.076 "firmware_revision": "25.01", 00:11:30.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:30.076 "oacs": { 00:11:30.076 "security": 0, 00:11:30.076 "format": 0, 00:11:30.076 "firmware": 0, 00:11:30.076 "ns_manage": 0 00:11:30.076 }, 00:11:30.076 "multi_ctrlr": true, 00:11:30.076 
"ana_reporting": false 00:11:30.076 }, 00:11:30.076 "vs": { 00:11:30.076 "nvme_version": "1.3" 00:11:30.076 }, 00:11:30.076 "ns_data": { 00:11:30.076 "id": 1, 00:11:30.076 "can_share": true 00:11:30.076 } 00:11:30.076 } 00:11:30.076 ], 00:11:30.076 "mp_policy": "active_passive" 00:11:30.076 } 00:11:30.076 } 00:11:30.076 ] 00:11:30.076 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2657345 00:11:30.076 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:30.076 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:30.076 Running I/O for 10 seconds... 00:11:31.017 Latency(us) 00:11:31.017 [2024-11-20T05:21:51.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.017 Nvme0n1 : 1.00 24981.00 97.58 0.00 0.00 0.00 0.00 0.00 00:11:31.018 [2024-11-20T05:21:51.297Z] =================================================================================================================== 00:11:31.018 [2024-11-20T05:21:51.297Z] Total : 24981.00 97.58 0.00 0.00 0.00 0.00 0.00 00:11:31.018 00:11:31.971 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:32.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.230 Nvme0n1 : 2.00 25137.00 98.19 0.00 0.00 0.00 0.00 0.00 00:11:32.231 [2024-11-20T05:21:52.510Z] =================================================================================================================== 00:11:32.231 [2024-11-20T05:21:52.510Z] Total : 25137.00 98.19 0.00 0.00 0.00 0.00 0.00 00:11:32.231 00:11:32.231 true 00:11:32.231 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:32.231 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:32.491 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:32.491 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:32.491 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2657345 00:11:33.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.063 Nvme0n1 : 3.00 25221.67 98.52 0.00 0.00 0.00 0.00 0.00 00:11:33.063 [2024-11-20T05:21:53.342Z] =================================================================================================================== 00:11:33.063 [2024-11-20T05:21:53.342Z] Total : 25221.67 98.52 0.00 0.00 0.00 0.00 0.00 00:11:33.063 00:11:34.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.447 Nvme0n1 : 4.00 25268.50 98.71 0.00 0.00 0.00 0.00 0.00 00:11:34.447 [2024-11-20T05:21:54.726Z] 
=================================================================================================================== 00:11:34.447 [2024-11-20T05:21:54.726Z] Total : 25268.50 98.71 0.00 0.00 0.00 0.00 0.00 00:11:34.447 00:11:35.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.390 Nvme0n1 : 5.00 25309.20 98.86 0.00 0.00 0.00 0.00 0.00 00:11:35.390 [2024-11-20T05:21:55.669Z] =================================================================================================================== 00:11:35.390 [2024-11-20T05:21:55.669Z] Total : 25309.20 98.86 0.00 0.00 0.00 0.00 0.00 00:11:35.390 00:11:36.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.331 Nvme0n1 : 6.00 25346.50 99.01 0.00 0.00 0.00 0.00 0.00 00:11:36.331 [2024-11-20T05:21:56.610Z] =================================================================================================================== 00:11:36.331 [2024-11-20T05:21:56.610Z] Total : 25346.50 99.01 0.00 0.00 0.00 0.00 0.00 00:11:36.331 00:11:37.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.272 Nvme0n1 : 7.00 25364.14 99.08 0.00 0.00 0.00 0.00 0.00 00:11:37.272 [2024-11-20T05:21:57.551Z] =================================================================================================================== 00:11:37.272 [2024-11-20T05:21:57.551Z] Total : 25364.14 99.08 0.00 0.00 0.00 0.00 0.00 00:11:37.272 00:11:38.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.213 Nvme0n1 : 8.00 25385.62 99.16 0.00 0.00 0.00 0.00 0.00 00:11:38.213 [2024-11-20T05:21:58.492Z] =================================================================================================================== 00:11:38.213 [2024-11-20T05:21:58.492Z] Total : 25385.62 99.16 0.00 0.00 0.00 0.00 0.00 00:11:38.213 00:11:39.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.154 Nvme0n1 : 9.00 25402.33 99.23 0.00 0.00 0.00 0.00 0.00 00:11:39.154 [2024-11-20T05:21:59.433Z] =================================================================================================================== 00:11:39.154 [2024-11-20T05:21:59.433Z] Total : 25402.33 99.23 0.00 0.00 0.00 0.00 0.00 00:11:39.154 00:11:40.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.096 Nvme0n1 : 10.00 25409.00 99.25 0.00 0.00 0.00 0.00 0.00 00:11:40.096 [2024-11-20T05:22:00.375Z] =================================================================================================================== 00:11:40.096 [2024-11-20T05:22:00.375Z] Total : 25409.00 99.25 0.00 0.00 0.00 0.00 0.00 00:11:40.096 00:11:40.096 00:11:40.096 Latency(us) 00:11:40.096 [2024-11-20T05:22:00.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.096 Nvme0n1 : 10.00 25406.52 99.24 0.00 0.00 5034.77 3085.65 13762.56 00:11:40.096 [2024-11-20T05:22:00.375Z] =================================================================================================================== 00:11:40.096 [2024-11-20T05:22:00.375Z] Total : 25406.52 99.24 0.00 0.00 5034.77 3085.65 13762.56 00:11:40.096 { 00:11:40.096 "results": [ 00:11:40.096 { 00:11:40.096 "job": "Nvme0n1", 00:11:40.096 "core_mask": "0x2", 00:11:40.096 "workload": "randwrite", 00:11:40.096 "status": "finished", 00:11:40.096 "queue_depth": 128, 00:11:40.096 "io_size": 4096, 00:11:40.096 
"runtime": 10.003535, 00:11:40.096 "iops": 25406.518795605753, 00:11:40.096 "mibps": 99.24421404533497, 00:11:40.096 "io_failed": 0, 00:11:40.096 "io_timeout": 0, 00:11:40.096 "avg_latency_us": 5034.773234705855, 00:11:40.096 "min_latency_us": 3085.653333333333, 00:11:40.096 "max_latency_us": 13762.56 00:11:40.096 } 00:11:40.096 ], 00:11:40.096 "core_count": 1 00:11:40.096 } 00:11:40.096 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2657008 00:11:40.096 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2657008 ']' 00:11:40.096 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2657008 00:11:40.096 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:11:40.096 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.096 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2657008 00:11:40.357 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:40.357 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:40.357 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2657008' 00:11:40.357 killing process with pid 2657008 00:11:40.357 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2657008 00:11:40.357 Received shutdown signal, test time was about 10.000000 seconds 00:11:40.357 00:11:40.357 Latency(us) 00:11:40.357 [2024-11-20T05:22:00.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.357 [2024-11-20T05:22:00.636Z] =================================================================================================================== 00:11:40.357 [2024-11-20T05:22:00.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:40.357 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2657008 00:11:40.357 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:40.618 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:40.618 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:40.618 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:40.879 06:22:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2653204 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2653204 00:11:40.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2653204 Killed "${NVMF_APP[@]}" "$@" 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2659382 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2659382 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2659382 ']' 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:40.879 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:41.140 [2024-11-20 06:22:01.158773] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:11:41.140 [2024-11-20 06:22:01.158829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.140 [2024-11-20 06:22:01.248874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.140 [2024-11-20 06:22:01.277509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.140 [2024-11-20 06:22:01.277536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.140 [2024-11-20 06:22:01.277541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.140 [2024-11-20 06:22:01.277546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
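What makes this the dirty variant is visible right here: the previous nvmf_tgt is killed with SIGKILL while the lvstore is still open, so its superblob is never cleanly closed, and re-registering the same AIO file on the restarted target forces blobstore recovery. A rough sketch of that sequence under the same paths (old_pid stands in for the PID captured earlier, 2653204 in this run):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"
    kill -9 "$old_pid"                                  # dirty shutdown: lvstore left open
    ip netns exec cvl_0_0_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &   # restart the target
    "$rpc" bdev_aio_create "$spdk/test/nvmf/target/aio_bdev" aio_bdev 4096
    # expect on the console: bs_recover "Performing recovery on blobstore",
    # then one "Recover: blob ..." line per blob replayed from metadata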
00:11:41.140 [2024-11-20 06:22:01.277550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.140 [2024-11-20 06:22:01.278005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.712 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:41.712 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:41.712 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.712 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.712 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:41.712 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.712 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:41.973 [2024-11-20 06:22:02.132224] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:41.973 [2024-11-20 06:22:02.132295] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:41.973 [2024-11-20 06:22:02.132317] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 146f3abb-be72-4b47-9e5f-30cc1439193a 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=146f3abb-be72-4b47-9e5f-30cc1439193a 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.973 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:42.233 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 146f3abb-be72-4b47-9e5f-30cc1439193a -t 2000 00:11:42.233 [ 00:11:42.233 { 00:11:42.233 "name": "146f3abb-be72-4b47-9e5f-30cc1439193a", 00:11:42.233 "aliases": [ 00:11:42.233 "lvs/lvol" 00:11:42.233 ], 00:11:42.233 "product_name": "Logical Volume", 00:11:42.233 "block_size": 4096, 00:11:42.233 "num_blocks": 38912, 00:11:42.233 "uuid": "146f3abb-be72-4b47-9e5f-30cc1439193a", 00:11:42.233 "assigned_rate_limits": { 00:11:42.233 "rw_ios_per_sec": 0, 00:11:42.233 "rw_mbytes_per_sec": 0, 
00:11:42.233 "r_mbytes_per_sec": 0, 00:11:42.233 "w_mbytes_per_sec": 0 00:11:42.233 }, 00:11:42.233 "claimed": false, 00:11:42.233 "zoned": false, 00:11:42.233 "supported_io_types": { 00:11:42.233 "read": true, 00:11:42.233 "write": true, 00:11:42.233 "unmap": true, 00:11:42.233 "flush": false, 00:11:42.233 "reset": true, 00:11:42.233 "nvme_admin": false, 00:11:42.233 "nvme_io": false, 00:11:42.233 "nvme_io_md": false, 00:11:42.233 "write_zeroes": true, 00:11:42.233 "zcopy": false, 00:11:42.233 "get_zone_info": false, 00:11:42.233 "zone_management": false, 00:11:42.233 "zone_append": false, 00:11:42.233 "compare": false, 00:11:42.233 "compare_and_write": false, 00:11:42.233 "abort": false, 00:11:42.233 "seek_hole": true, 00:11:42.233 "seek_data": true, 00:11:42.233 "copy": false, 00:11:42.233 "nvme_iov_md": false 00:11:42.233 }, 00:11:42.233 "driver_specific": { 00:11:42.234 "lvol": { 00:11:42.234 "lvol_store_uuid": "971a1042-02d3-465d-9183-9f61f52056d5", 00:11:42.234 "base_bdev": "aio_bdev", 00:11:42.234 "thin_provision": false, 00:11:42.234 "num_allocated_clusters": 38, 00:11:42.234 "snapshot": false, 00:11:42.234 "clone": false, 00:11:42.234 "esnap_clone": false 00:11:42.234 } 00:11:42.234 } 00:11:42.234 } 00:11:42.234 ] 00:11:42.234 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:42.234 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:42.234 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:42.493 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:42.493 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:42.493 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:42.754 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:42.754 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:42.754 [2024-11-20 06:22:02.996903] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:42.755 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:43.015 request: 00:11:43.015 { 00:11:43.015 "uuid": "971a1042-02d3-465d-9183-9f61f52056d5", 00:11:43.015 "method": "bdev_lvol_get_lvstores", 00:11:43.015 "req_id": 1 00:11:43.015 } 00:11:43.015 Got JSON-RPC error response 00:11:43.015 response: 00:11:43.015 { 00:11:43.015 "code": -19, 00:11:43.015 "message": "No such device" 00:11:43.015 } 00:11:43.015 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:43.015 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:43.015 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:43.015 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:43.015 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:43.276 aio_bdev 00:11:43.276 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 146f3abb-be72-4b47-9e5f-30cc1439193a 00:11:43.276 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=146f3abb-be72-4b47-9e5f-30cc1439193a 00:11:43.276 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:43.276 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:43.276 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:43.276 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:43.276 06:22:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:43.276 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 146f3abb-be72-4b47-9e5f-30cc1439193a -t 2000 00:11:43.538 [ 00:11:43.538 { 00:11:43.538 "name": "146f3abb-be72-4b47-9e5f-30cc1439193a", 00:11:43.538 "aliases": [ 00:11:43.538 "lvs/lvol" 00:11:43.538 ], 00:11:43.538 "product_name": "Logical Volume", 00:11:43.538 "block_size": 4096, 00:11:43.538 "num_blocks": 38912, 00:11:43.538 "uuid": "146f3abb-be72-4b47-9e5f-30cc1439193a", 00:11:43.538 "assigned_rate_limits": { 00:11:43.538 "rw_ios_per_sec": 0, 00:11:43.538 "rw_mbytes_per_sec": 0, 00:11:43.538 "r_mbytes_per_sec": 0, 00:11:43.538 "w_mbytes_per_sec": 0 00:11:43.538 }, 00:11:43.538 "claimed": false, 00:11:43.538 "zoned": false, 00:11:43.538 "supported_io_types": { 00:11:43.538 "read": true, 00:11:43.538 "write": true, 00:11:43.538 "unmap": true, 00:11:43.538 "flush": false, 00:11:43.538 "reset": true, 00:11:43.538 "nvme_admin": false, 00:11:43.538 "nvme_io": false, 00:11:43.538 "nvme_io_md": false, 00:11:43.538 "write_zeroes": true, 00:11:43.538 "zcopy": false, 00:11:43.538 "get_zone_info": false, 00:11:43.538 "zone_management": false, 00:11:43.538 "zone_append": false, 00:11:43.538 "compare": false, 00:11:43.538 "compare_and_write": false, 00:11:43.538 "abort": false, 00:11:43.538 "seek_hole": true, 00:11:43.538 "seek_data": true, 00:11:43.538 "copy": false, 00:11:43.538 "nvme_iov_md": false 00:11:43.538 }, 00:11:43.538 "driver_specific": { 00:11:43.538 "lvol": { 00:11:43.538 "lvol_store_uuid": "971a1042-02d3-465d-9183-9f61f52056d5", 00:11:43.538 "base_bdev": "aio_bdev", 00:11:43.538 "thin_provision": false, 00:11:43.538 "num_allocated_clusters": 38, 00:11:43.538 "snapshot": false, 00:11:43.538 "clone": false, 00:11:43.538 "esnap_clone": false 00:11:43.538 } 00:11:43.538 } 00:11:43.538 } 00:11:43.538 ] 00:11:43.538 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:43.538 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:43.538 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:43.799 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:43.799 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:43.799 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:43.799 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:43.799 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 146f3abb-be72-4b47-9e5f-30cc1439193a 00:11:44.060 06:22:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 971a1042-02d3-465d-9183-9f61f52056d5 00:11:44.320 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:44.320 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:44.320 00:11:44.320 real 0m17.336s 00:11:44.320 user 0m45.568s 00:11:44.320 sys 0m2.927s 00:11:44.320 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.320 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:44.320 ************************************ 00:11:44.320 END TEST lvs_grow_dirty 00:11:44.320 ************************************ 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:44.581 nvmf_trace.0 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:44.581 rmmod nvme_tcp 00:11:44.581 rmmod nvme_fabrics 00:11:44.581 rmmod nvme_keyring 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:44.581 
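The teardown that follows is the stock nvmftestfini pattern: stop the target, unload the NVMe-oF kernel modules, then strip the test's iptables rules and interface addresses. A condensed sketch of its kernel-side half, assuming the module and interface names shown in the output below:

    kill "$nvmfpid" && wait "$nvmfpid"        # stop nvmf_tgt (pid 2659382 in this run)
    sync
    for mod in nvme-tcp nvme-fabrics nvme-keyring; do
        modprobe -v -r "$mod" || true         # tolerate already-unloaded modules
    done
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK test rules
    ip -4 addr flush cvl_0_1                  # clear the test NIC's address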
06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2659382 ']' 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2659382 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2659382 ']' 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2659382 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2659382 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2659382' 00:11:44.581 killing process with pid 2659382 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2659382 00:11:44.581 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2659382 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.842 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.758 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.758 00:11:46.758 real 0m44.459s 00:11:46.758 user 1m7.438s 00:11:46.758 sys 0m10.385s 00:11:46.758 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.758 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.758 ************************************ 00:11:46.758 END TEST nvmf_lvs_grow 00:11:46.758 ************************************ 00:11:46.758 06:22:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:46.758 06:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:46.758 06:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.758 06:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 ************************************ 00:11:47.020 START TEST nvmf_bdev_io_wait 00:11:47.020 ************************************ 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:47.020 * Looking for test storage... 00:11:47.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:47.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.020 --rc genhtml_branch_coverage=1 00:11:47.020 --rc genhtml_function_coverage=1 00:11:47.020 --rc genhtml_legend=1 00:11:47.020 --rc geninfo_all_blocks=1 00:11:47.020 --rc geninfo_unexecuted_blocks=1 00:11:47.020 00:11:47.020 ' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:47.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.020 --rc genhtml_branch_coverage=1 00:11:47.020 --rc genhtml_function_coverage=1 00:11:47.020 --rc genhtml_legend=1 00:11:47.020 --rc geninfo_all_blocks=1 00:11:47.020 --rc geninfo_unexecuted_blocks=1 00:11:47.020 00:11:47.020 ' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:47.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.020 --rc genhtml_branch_coverage=1 00:11:47.020 --rc genhtml_function_coverage=1 00:11:47.020 --rc genhtml_legend=1 00:11:47.020 --rc geninfo_all_blocks=1 00:11:47.020 --rc geninfo_unexecuted_blocks=1 00:11:47.020 00:11:47.020 ' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:47.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.020 --rc genhtml_branch_coverage=1 00:11:47.020 --rc genhtml_function_coverage=1 00:11:47.020 --rc genhtml_legend=1 00:11:47.020 --rc geninfo_all_blocks=1 00:11:47.020 --rc geninfo_unexecuted_blocks=1 00:11:47.020 00:11:47.020 ' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.020 06:22:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.020 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.021 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.281 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:47.281 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:47.281 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.281 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.281 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.282 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.430 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.430 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:55.430 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:55.430 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:55.430 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:55.430 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:55.430 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:55.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:55.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.431 06:22:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:55.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:55.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.431 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:55.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:11:55.432 00:11:55.432 --- 10.0.0.2 ping statistics --- 00:11:55.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.432 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:11:55.432 00:11:55.432 --- 10.0.0.1 ping statistics --- 00:11:55.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.432 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2664454 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2664454 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2664454 ']' 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:55.432 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.432 [2024-11-20 06:22:14.900551] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:11:55.432 [2024-11-20 06:22:14.900615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.432 [2024-11-20 06:22:14.999724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.432 [2024-11-20 06:22:15.054103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.432 [2024-11-20 06:22:15.054154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.432 [2024-11-20 06:22:15.054178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.432 [2024-11-20 06:22:15.054186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.432 [2024-11-20 06:22:15.054192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.432 [2024-11-20 06:22:15.056595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.432 [2024-11-20 06:22:15.056756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.432 [2024-11-20 06:22:15.056922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.432 [2024-11-20 06:22:15.056922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:11:55.695 [2024-11-20 06:22:15.831905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.695 Malloc0 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:55.695 [2024-11-20 06:22:15.897466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.695 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2664792 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2664795 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:55.696 { 00:11:55.696 "params": { 
00:11:55.696 "name": "Nvme$subsystem", 00:11:55.696 "trtype": "$TEST_TRANSPORT", 00:11:55.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "$NVMF_PORT", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:55.696 "hdgst": ${hdgst:-false}, 00:11:55.696 "ddgst": ${ddgst:-false} 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 } 00:11:55.696 EOF 00:11:55.696 )") 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2664797 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2664801 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:55.696 { 00:11:55.696 "params": { 00:11:55.696 "name": "Nvme$subsystem", 00:11:55.696 "trtype": "$TEST_TRANSPORT", 00:11:55.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "$NVMF_PORT", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:55.696 "hdgst": ${hdgst:-false}, 00:11:55.696 "ddgst": ${ddgst:-false} 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 } 00:11:55.696 EOF 00:11:55.696 )") 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:55.696 { 00:11:55.696 "params": { 00:11:55.696 "name": "Nvme$subsystem", 00:11:55.696 "trtype": "$TEST_TRANSPORT", 00:11:55.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "$NVMF_PORT", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:55.696 "hdgst": ${hdgst:-false}, 
00:11:55.696 "ddgst": ${ddgst:-false} 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 } 00:11:55.696 EOF 00:11:55.696 )") 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:55.696 { 00:11:55.696 "params": { 00:11:55.696 "name": "Nvme$subsystem", 00:11:55.696 "trtype": "$TEST_TRANSPORT", 00:11:55.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "$NVMF_PORT", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:55.696 "hdgst": ${hdgst:-false}, 00:11:55.696 "ddgst": ${ddgst:-false} 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 } 00:11:55.696 EOF 00:11:55.696 )") 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2664792 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:55.696 "params": { 00:11:55.696 "name": "Nvme1", 00:11:55.696 "trtype": "tcp", 00:11:55.696 "traddr": "10.0.0.2", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "4420", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:55.696 "hdgst": false, 00:11:55.696 "ddgst": false 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 }' 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:55.696 "params": { 00:11:55.696 "name": "Nvme1", 00:11:55.696 "trtype": "tcp", 00:11:55.696 "traddr": "10.0.0.2", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "4420", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:55.696 "hdgst": false, 00:11:55.696 "ddgst": false 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 }' 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:55.696 "params": { 00:11:55.696 "name": "Nvme1", 00:11:55.696 "trtype": "tcp", 00:11:55.696 "traddr": "10.0.0.2", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "4420", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:55.696 "hdgst": false, 00:11:55.696 "ddgst": false 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 }' 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:55.696 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:55.696 "params": { 00:11:55.696 "name": "Nvme1", 00:11:55.696 "trtype": "tcp", 00:11:55.696 "traddr": "10.0.0.2", 00:11:55.696 "adrfam": "ipv4", 00:11:55.696 "trsvcid": "4420", 00:11:55.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:55.696 "hdgst": false, 00:11:55.696 "ddgst": false 00:11:55.696 }, 00:11:55.696 "method": "bdev_nvme_attach_controller" 00:11:55.696 }' 00:11:55.696 [2024-11-20 06:22:15.957633] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:11:55.696 [2024-11-20 06:22:15.957699] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:55.696 [2024-11-20 06:22:15.959375] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:11:55.696 [2024-11-20 06:22:15.959453] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:55.696 [2024-11-20 06:22:15.962971] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:11:55.696 [2024-11-20 06:22:15.963030] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:55.697 [2024-11-20 06:22:15.963648] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:11:55.697 [2024-11-20 06:22:15.963733] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:55.958 [2024-11-20 06:22:16.165329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.958 [2024-11-20 06:22:16.204007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:56.219 [2024-11-20 06:22:16.260680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.219 [2024-11-20 06:22:16.301956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:56.219 [2024-11-20 06:22:16.325195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.219 [2024-11-20 06:22:16.364275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:56.219 [2024-11-20 06:22:16.421593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.219 [2024-11-20 06:22:16.464129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:56.481 Running I/O for 1 seconds... 00:11:56.481 Running I/O for 1 seconds... 00:11:56.481 Running I/O for 1 seconds... 00:11:56.741 Running I/O for 1 seconds... 00:11:57.312 188600.00 IOPS, 736.72 MiB/s 00:11:57.312 Latency(us) 00:11:57.312 [2024-11-20T05:22:17.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.312 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:57.312 Nvme1n1 : 1.00 188221.58 735.24 0.00 0.00 675.91 300.37 1993.39 00:11:57.312 [2024-11-20T05:22:17.591Z] =================================================================================================================== 00:11:57.312 [2024-11-20T05:22:17.591Z] Total : 188221.58 735.24 0.00 0.00 675.91 300.37 1993.39 00:11:57.572 6817.00 IOPS, 26.63 MiB/s 00:11:57.572 Latency(us) 00:11:57.572 [2024-11-20T05:22:17.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.572 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:57.572 Nvme1n1 : 1.02 6813.94 26.62 0.00 0.00 18602.96 7700.48 31020.37 00:11:57.572 [2024-11-20T05:22:17.851Z] =================================================================================================================== 00:11:57.572 [2024-11-20T05:22:17.851Z] Total : 6813.94 26.62 0.00 0.00 18602.96 7700.48 31020.37 00:11:57.572 12303.00 IOPS, 48.06 MiB/s 00:11:57.572 Latency(us) 00:11:57.572 [2024-11-20T05:22:17.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.572 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:57.572 Nvme1n1 : 1.01 12358.31 48.27 0.00 0.00 10320.79 5406.72 19442.35 00:11:57.572 [2024-11-20T05:22:17.851Z] =================================================================================================================== 00:11:57.572 [2024-11-20T05:22:17.851Z] Total : 12358.31 48.27 0.00 0.00 10320.79 5406.72 19442.35 00:11:57.572 6782.00 IOPS, 26.49 MiB/s 00:11:57.572 Latency(us) 00:11:57.572 [2024-11-20T05:22:17.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.572 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:57.572 Nvme1n1 : 1.01 6902.83 26.96 0.00 0.00 18489.11 4450.99 42161.49 00:11:57.572 [2024-11-20T05:22:17.851Z] 
=================================================================================================================== 00:11:57.572 [2024-11-20T05:22:17.851Z] Total : 6902.83 26.96 0.00 0.00 18489.11 4450.99 42161.49 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2664795 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2664797 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2664801 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.833 rmmod nvme_tcp 00:11:57.833 rmmod nvme_fabrics 00:11:57.833 rmmod nvme_keyring 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2664454 ']' 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2664454 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2664454 ']' 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2664454 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:57.833 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2664454 00:11:57.833 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:57.833 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:57.833 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 2664454' 00:11:57.833 killing process with pid 2664454 00:11:57.833 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2664454 00:11:57.833 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2664454 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.117 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.164 00:12:00.164 real 0m13.154s 00:12:00.164 user 0m19.917s 00:12:00.164 sys 0m7.502s 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:00.164 ************************************ 00:12:00.164 END TEST nvmf_bdev_io_wait 00:12:00.164 ************************************ 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:00.164 ************************************ 00:12:00.164 START TEST nvmf_queue_depth 00:12:00.164 ************************************ 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:00.164 * Looking for test storage... 
00:12:00.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:12:00.164 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:00.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.426 --rc genhtml_branch_coverage=1 00:12:00.426 --rc genhtml_function_coverage=1 00:12:00.426 --rc genhtml_legend=1 00:12:00.426 --rc geninfo_all_blocks=1 00:12:00.426 --rc geninfo_unexecuted_blocks=1 00:12:00.426 00:12:00.426 ' 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:00.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.426 --rc genhtml_branch_coverage=1 00:12:00.426 --rc genhtml_function_coverage=1 00:12:00.426 --rc genhtml_legend=1 00:12:00.426 --rc geninfo_all_blocks=1 00:12:00.426 --rc geninfo_unexecuted_blocks=1 00:12:00.426 00:12:00.426 ' 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:00.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.426 --rc genhtml_branch_coverage=1 00:12:00.426 --rc genhtml_function_coverage=1 00:12:00.426 --rc genhtml_legend=1 00:12:00.426 --rc geninfo_all_blocks=1 00:12:00.426 --rc geninfo_unexecuted_blocks=1 00:12:00.426 00:12:00.426 ' 00:12:00.426 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.427 --rc genhtml_branch_coverage=1 00:12:00.427 --rc genhtml_function_coverage=1 00:12:00.427 --rc genhtml_legend=1 00:12:00.427 --rc geninfo_all_blocks=1 00:12:00.427 --rc geninfo_unexecuted_blocks=1 00:12:00.427 00:12:00.427 ' 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.427 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.567 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.567 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.567 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.567 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:08.568 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:08.568 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:08.568 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:08.568 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
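The device discovery traced above matches each NIC's PCI vendor:device ID against an allow-list (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX IDs), then resolves each PCI function to a kernel interface name through sysfs. A condensed sketch of that sysfs lookup, using the two PCI addresses found on this rig:

    # Map a PCI network function to its kernel netdev, mirroring the
    # pci_net_devs glob in nvmf/common.sh above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
        done
    done
    # On this rig: 0000:4b:00.0 -> cvl_0_0, 0000:4b:00.1 -> cvl_0_1
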
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.568 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:12:08.569 00:12:08.569 --- 10.0.0.2 ping statistics --- 00:12:08.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.569 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:12:08.569 00:12:08.569 --- 10.0.0.1 ping statistics --- 00:12:08.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.569 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2669424 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2669424 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2669424 ']' 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:08.569 06:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.569 [2024-11-20 06:22:28.039528] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
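Two things in the bring-up above are worth flagging. First, the recurring "[: : integer expression expected" message from test/nvmf/common.sh line 33 is a benign script warning: '[' '' -eq 1 ']' fails because an unset test flag expands to the empty string; guarding the expansion with a ${VAR:-0}-style default would silence it (a hypothetical fix, not something this run applies). Second, the harness builds its test network by moving one E810 port into a private namespace, so target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over the real link between the two ports. Reduced to its essentials, with the interface names from this run:

    # Namespace topology, condensed from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Firewall rule tagged SPDK_NVMF so teardown can sweep it later
    # (the harness embeds the full rule spec after the tag):
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # Both directions verified before the target starts:
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launched right after this runs entirely inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), while its RPC socket stays at the default /var/tmp/spdk.sock, since path-based Unix sockets are bound to the filesystem rather than the network namespace.
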
00:12:08.569 [2024-11-20 06:22:28.039593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.569 [2024-11-20 06:22:28.140262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.569 [2024-11-20 06:22:28.190266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.569 [2024-11-20 06:22:28.190317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.569 [2024-11-20 06:22:28.190326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.569 [2024-11-20 06:22:28.190333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.569 [2024-11-20 06:22:28.190339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.569 [2024-11-20 06:22:28.191091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 [2024-11-20 06:22:28.898314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 Malloc0 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.830 06:22:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 [2024-11-20 06:22:28.959401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2669553 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2669553 /var/tmp/bdevperf.sock 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2669553 ']' 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:08.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:08.830 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 [2024-11-20 06:22:29.018817] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
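Strung together, the rpc_cmd calls traced above provision the target in five steps; a condensed sketch against the default /var/tmp/spdk.sock, with the exact values from this run:

    # Target provisioning driven by queue_depth.sh above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The bdevperf process started at the end (-z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10) is the load generator: it waits idle on its own RPC socket until a controller is attached and perform_tests is issued.
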
00:12:08.830 [2024-11-20 06:22:29.018877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669553 ] 00:12:09.090 [2024-11-20 06:22:29.110444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.090 [2024-11-20 06:22:29.162591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.662 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:09.662 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:12:09.662 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:09.662 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.662 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:09.662 NVMe0n1 00:12:09.662 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.662 06:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:09.924 Running I/O for 10 seconds... 00:12:11.805 9250.00 IOPS, 36.13 MiB/s [2024-11-20T05:22:33.024Z] 10663.50 IOPS, 41.65 MiB/s [2024-11-20T05:22:34.403Z] 10924.33 IOPS, 42.67 MiB/s [2024-11-20T05:22:35.343Z] 11264.00 IOPS, 44.00 MiB/s [2024-11-20T05:22:36.283Z] 11674.60 IOPS, 45.60 MiB/s [2024-11-20T05:22:37.222Z] 11948.50 IOPS, 46.67 MiB/s [2024-11-20T05:22:38.164Z] 12161.71 IOPS, 47.51 MiB/s [2024-11-20T05:22:39.104Z] 12351.50 IOPS, 48.25 MiB/s [2024-11-20T05:22:40.043Z] 12512.56 IOPS, 48.88 MiB/s [2024-11-20T05:22:40.304Z] 12622.10 IOPS, 49.31 MiB/s 00:12:20.025 Latency(us) 00:12:20.025 [2024-11-20T05:22:40.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.025 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:20.025 Verification LBA range: start 0x0 length 0x4000 00:12:20.025 NVMe0n1 : 10.05 12660.84 49.46 0.00 0.00 80585.41 6034.77 70341.97 00:12:20.025 [2024-11-20T05:22:40.304Z] =================================================================================================================== 00:12:20.025 [2024-11-20T05:22:40.304Z] Total : 12660.84 49.46 0.00 0.00 80585.41 6034.77 70341.97 00:12:20.025 { 00:12:20.025 "results": [ 00:12:20.025 { 00:12:20.025 "job": "NVMe0n1", 00:12:20.025 "core_mask": "0x1", 00:12:20.025 "workload": "verify", 00:12:20.025 "status": "finished", 00:12:20.025 "verify_range": { 00:12:20.025 "start": 0, 00:12:20.025 "length": 16384 00:12:20.025 }, 00:12:20.025 "queue_depth": 1024, 00:12:20.025 "io_size": 4096, 00:12:20.025 "runtime": 10.045466, 00:12:20.025 "iops": 12660.836241942385, 00:12:20.025 "mibps": 49.45639157008744, 00:12:20.025 "io_failed": 0, 00:12:20.025 "io_timeout": 0, 00:12:20.025 "avg_latency_us": 80585.40513104374, 00:12:20.025 "min_latency_us": 6034.7733333333335, 00:12:20.025 "max_latency_us": 70341.97333333333 00:12:20.025 } 00:12:20.025 ], 00:12:20.025 "core_count": 1 00:12:20.025 } 00:12:20.025 06:22:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2669553 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2669553 ']' 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2669553 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2669553 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2669553' 00:12:20.025 killing process with pid 2669553 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2669553 00:12:20.025 Received shutdown signal, test time was about 10.000000 seconds 00:12:20.025 00:12:20.025 Latency(us) 00:12:20.025 [2024-11-20T05:22:40.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.025 [2024-11-20T05:22:40.304Z] =================================================================================================================== 00:12:20.025 [2024-11-20T05:22:40.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2669553 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.025 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.025 rmmod nvme_tcp 00:12:20.025 rmmod nvme_fabrics 00:12:20.286 rmmod nvme_keyring 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2669424 ']' 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2669424 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2669424 ']' 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 2669424 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2669424 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2669424' 00:12:20.286 killing process with pid 2669424 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2669424 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2669424 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.286 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.832 00:12:22.832 real 0m22.298s 00:12:22.832 user 0m25.513s 00:12:22.832 sys 0m6.995s 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:22.832 ************************************ 00:12:22.832 END TEST nvmf_queue_depth 00:12:22.832 ************************************ 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core -- 
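Teardown above mirrors bring-up, and the firewall cleanup shows why the rule added earlier carried the SPDK_NVMF comment tag: rather than deleting rules by position, iptr filters them out of the saved ruleset, which stays correct even if unrelated rules were inserted in the meantime:

    # Tag-and-sweep cleanup, as traced in nvmf_tcp_fini above:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
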
common/autotest_common.sh@10 -- # set +x 00:12:22.832 ************************************ 00:12:22.832 START TEST nvmf_target_multipath 00:12:22.832 ************************************ 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:22.832 * Looking for test storage... 00:12:22.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:22.832 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:22.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.833 --rc genhtml_branch_coverage=1 00:12:22.833 --rc genhtml_function_coverage=1 00:12:22.833 --rc genhtml_legend=1 00:12:22.833 --rc geninfo_all_blocks=1 00:12:22.833 --rc geninfo_unexecuted_blocks=1 00:12:22.833 00:12:22.833 ' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:22.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.833 --rc genhtml_branch_coverage=1 00:12:22.833 --rc genhtml_function_coverage=1 00:12:22.833 --rc genhtml_legend=1 00:12:22.833 --rc geninfo_all_blocks=1 00:12:22.833 --rc geninfo_unexecuted_blocks=1 00:12:22.833 00:12:22.833 ' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:22.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.833 --rc genhtml_branch_coverage=1 00:12:22.833 --rc genhtml_function_coverage=1 00:12:22.833 --rc genhtml_legend=1 00:12:22.833 --rc geninfo_all_blocks=1 00:12:22.833 --rc geninfo_unexecuted_blocks=1 00:12:22.833 00:12:22.833 ' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:22.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.833 --rc genhtml_branch_coverage=1 00:12:22.833 --rc genhtml_function_coverage=1 00:12:22.833 --rc genhtml_legend=1 00:12:22.833 --rc geninfo_all_blocks=1 00:12:22.833 --rc geninfo_unexecuted_blocks=1 00:12:22.833 00:12:22.833 ' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
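The long cmp_versions trace above is scripts/common.sh deciding that the installed lcov (1.15) predates version 2, which selects the older --rc lcov_* spelling of the coverage flags exported next. The same dotted-version test can be condensed to one line where GNU sort -V is available (a sketch, not the SPDK implementation, which compares fields in pure bash):

    # lt A B: true when version A sorts strictly before version B.
    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo "lcov 1.15 < 2"   # the branch taken in this run
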
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.833 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.834 06:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:30.975 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.975 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:30.975 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:30.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.976 06:22:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:30.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:12:30.976 00:12:30.976 --- 10.0.0.2 ping statistics --- 00:12:30.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.976 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:12:30.976 00:12:30.976 --- 10.0.0.1 ping statistics --- 00:12:30.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.976 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:30.976 only one NIC for nvmf test 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
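The multipath test above completed the interface bring-up, found no second target IP configured (NVMF_SECOND_TARGET_IP is empty), printed "only one NIC for nvmf test", and is now unwinding. The bring-up itself (nvmf_tcp_init in nvmf/common.sh) reduces to a short, reproducible sequence: move one port of the NIC pair into a private network namespace, address both ends, open the NVMe/TCP port in the firewall, and ping in both directions. A minimal standalone sketch using the cvl_0_0/cvl_0_1 device names from this run (any NIC pair would do; run as root):

    # Target side lives in its own namespace; the initiator stays in the host namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in on the initiator interface; the comment tag lets
    # the harness strip exactly this rule later via iptables-save | grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every command in the sketch appears verbatim in the trace; only the ordering comments are added.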
00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.976 rmmod nvme_tcp 00:12:30.976 rmmod nvme_fabrics 00:12:30.976 rmmod nvme_keyring 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.976 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.977 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.360 00:12:32.360 real 0m9.933s 00:12:32.360 user 0m2.096s 00:12:32.360 sys 0m5.785s 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:32.360 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:32.360 ************************************ 00:12:32.360 END TEST nvmf_target_multipath 00:12:32.360 ************************************ 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:32.621 ************************************ 00:12:32.621 START TEST nvmf_zcopy 00:12:32.621 ************************************ 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:32.621 * Looking for test storage... 
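Before sourcing nvmf/common.sh, the zcopy test probes the installed lcov, traced just below as lt 1.15 2 feeding scripts/common.sh cmp_versions: split both version strings on '.' and '-', then compare field by field numerically to pick the right LCOV_OPTS syntax. A minimal re-implementation of that idea, not the verbatim helper, and assuming purely numeric fields:

    # Sketch of the cmp_versions idea: IFS splits on '.' and '-',
    # missing fields default to 0, first differing field decides.
    version_lt() {
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal, therefore not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x option handling"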
00:12:32.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:32.621 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.622 --rc genhtml_branch_coverage=1 00:12:32.622 --rc genhtml_function_coverage=1 00:12:32.622 --rc genhtml_legend=1 00:12:32.622 --rc geninfo_all_blocks=1 00:12:32.622 --rc geninfo_unexecuted_blocks=1 00:12:32.622 00:12:32.622 ' 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.622 --rc genhtml_branch_coverage=1 00:12:32.622 --rc genhtml_function_coverage=1 00:12:32.622 --rc genhtml_legend=1 00:12:32.622 --rc geninfo_all_blocks=1 00:12:32.622 --rc geninfo_unexecuted_blocks=1 00:12:32.622 00:12:32.622 ' 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.622 --rc genhtml_branch_coverage=1 00:12:32.622 --rc genhtml_function_coverage=1 00:12:32.622 --rc genhtml_legend=1 00:12:32.622 --rc geninfo_all_blocks=1 00:12:32.622 --rc geninfo_unexecuted_blocks=1 00:12:32.622 00:12:32.622 ' 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:32.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.622 --rc genhtml_branch_coverage=1 00:12:32.622 --rc genhtml_function_coverage=1 00:12:32.622 --rc genhtml_legend=1 00:12:32.622 --rc geninfo_all_blocks=1 00:12:32.622 --rc geninfo_unexecuted_blocks=1 00:12:32.622 00:12:32.622 ' 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.622 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.884 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.030 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:41.031 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:41.031 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:41.031 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:41.031 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:12:41.031 00:12:41.031 --- 10.0.0.2 ping statistics --- 00:12:41.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.031 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:12:41.031 00:12:41.031 --- 10.0.0.1 ping statistics --- 00:12:41.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.031 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2680264 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2680264 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2680264 ']' 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.031 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:41.032 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.032 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:41.032 06:23:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.032 [2024-11-20 06:23:00.501561] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:12:41.032 [2024-11-20 06:23:00.501625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.032 [2024-11-20 06:23:00.599584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.032 [2024-11-20 06:23:00.650673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.032 [2024-11-20 06:23:00.650720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.032 [2024-11-20 06:23:00.650729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.032 [2024-11-20 06:23:00.650742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.032 [2024-11-20 06:23:00.650748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.032 [2024-11-20 06:23:00.651508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.293 [2024-11-20 06:23:01.364540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.293 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.294 [2024-11-20 06:23:01.388799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.294 malloc0 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.294 { 00:12:41.294 "params": { 00:12:41.294 "name": "Nvme$subsystem", 00:12:41.294 "trtype": "$TEST_TRANSPORT", 00:12:41.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.294 "adrfam": "ipv4", 00:12:41.294 "trsvcid": "$NVMF_PORT", 00:12:41.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.294 "hdgst": ${hdgst:-false}, 00:12:41.294 "ddgst": ${ddgst:-false} 00:12:41.294 }, 00:12:41.294 "method": "bdev_nvme_attach_controller" 00:12:41.294 } 00:12:41.294 EOF 00:12:41.294 )") 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:41.294 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.294 "params": { 00:12:41.294 "name": "Nvme1", 00:12:41.294 "trtype": "tcp", 00:12:41.294 "traddr": "10.0.0.2", 00:12:41.294 "adrfam": "ipv4", 00:12:41.294 "trsvcid": "4420", 00:12:41.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.294 "hdgst": false, 00:12:41.294 "ddgst": false 00:12:41.294 }, 00:12:41.294 "method": "bdev_nvme_attach_controller" 00:12:41.294 }' 00:12:41.294 [2024-11-20 06:23:01.491270] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:12:41.294 [2024-11-20 06:23:01.491333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680591 ] 00:12:41.556 [2024-11-20 06:23:01.582710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.556 [2024-11-20 06:23:01.635320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.556 Running I/O for 10 seconds... 00:12:43.883 6341.00 IOPS, 49.54 MiB/s [2024-11-20T05:23:05.103Z] 7412.00 IOPS, 57.91 MiB/s [2024-11-20T05:23:06.043Z] 8171.33 IOPS, 63.84 MiB/s [2024-11-20T05:23:06.985Z] 8550.75 IOPS, 66.80 MiB/s [2024-11-20T05:23:07.931Z] 8780.20 IOPS, 68.60 MiB/s [2024-11-20T05:23:08.962Z] 8933.50 IOPS, 69.79 MiB/s [2024-11-20T05:23:09.904Z] 9042.43 IOPS, 70.64 MiB/s [2024-11-20T05:23:10.843Z] 9128.00 IOPS, 71.31 MiB/s [2024-11-20T05:23:12.227Z] 9188.89 IOPS, 71.79 MiB/s [2024-11-20T05:23:12.227Z] 9242.10 IOPS, 72.20 MiB/s 00:12:51.948 Latency(us) 00:12:51.948 [2024-11-20T05:23:12.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.948 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:51.948 Verification LBA range: start 0x0 length 0x1000 00:12:51.948 Nvme1n1 : 10.01 9242.76 72.21 0.00 0.00 13800.45 494.93 28398.93 00:12:51.948 [2024-11-20T05:23:12.227Z] =================================================================================================================== 00:12:51.948 [2024-11-20T05:23:12.227Z] Total : 9242.76 72.21 0.00 0.00 13800.45 494.93 28398.93 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2682611 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:51.948 { 00:12:51.948 "params": { 00:12:51.948 "name": 
"Nvme$subsystem", 00:12:51.948 "trtype": "$TEST_TRANSPORT", 00:12:51.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.948 "adrfam": "ipv4", 00:12:51.948 "trsvcid": "$NVMF_PORT", 00:12:51.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.948 "hdgst": ${hdgst:-false}, 00:12:51.948 "ddgst": ${ddgst:-false} 00:12:51.948 }, 00:12:51.948 "method": "bdev_nvme_attach_controller" 00:12:51.948 } 00:12:51.948 EOF 00:12:51.948 )") 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:51.948 [2024-11-20 06:23:11.947091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.948 [2024-11-20 06:23:11.947120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:51.948 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:51.948 "params": { 00:12:51.948 "name": "Nvme1", 00:12:51.948 "trtype": "tcp", 00:12:51.948 "traddr": "10.0.0.2", 00:12:51.948 "adrfam": "ipv4", 00:12:51.948 "trsvcid": "4420", 00:12:51.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.948 "hdgst": false, 00:12:51.948 "ddgst": false 00:12:51.948 }, 00:12:51.948 "method": "bdev_nvme_attach_controller" 00:12:51.948 }' 00:12:51.948 [2024-11-20 06:23:11.959093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.948 [2024-11-20 06:23:11.959103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.948 [2024-11-20 06:23:11.971121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.948 [2024-11-20 06:23:11.971129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.948 [2024-11-20 06:23:11.983152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.948 [2024-11-20 06:23:11.983163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.948 [2024-11-20 06:23:11.989512] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:12:51.948 [2024-11-20 06:23:11.989559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682611 ] 00:12:51.948 [2024-11-20 06:23:11.995185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.948 [2024-11-20 06:23:11.995193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.948 [2024-11-20 06:23:12.007217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.948 [2024-11-20 06:23:12.007225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.948 [2024-11-20 06:23:12.019247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.948 [2024-11-20 06:23:12.019254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.948 [2024-11-20 06:23:12.031279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.031286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.043310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.043316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.055340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.055346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.067372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.067379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.070194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.949 [2024-11-20 06:23:12.079403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.079416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.091433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.091441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.099642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.949 [2024-11-20 06:23:12.103466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.103473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.115500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.115512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.127531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.127544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.139560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:12:51.949 [2024-11-20 06:23:12.139570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.151592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.151600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.163620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.163628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.175663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.175681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.187691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.187701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.199720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.199730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.211751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.211761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.949 [2024-11-20 06:23:12.223784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.949 [2024-11-20 06:23:12.223793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.209 [2024-11-20 06:23:12.274602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.209 [2024-11-20 06:23:12.274617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.209 [2024-11-20 06:23:12.283943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.283952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 Running I/O for 5 seconds... 
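[editor note] The subsystem.c "Requested NSID 1 already in use" / nvmf_rpc.c "Unable to add namespace" pairs that dominate the rest of this excerpt appear to be expected negative-path output, not failures of the run: while bdevperf drives I/O, the test repeatedly attempts to re-add namespace 1 to the (paused) subsystem, and the target correctly rejects every attempt. A minimal sketch of one such rejected attempt, assuming the stock rpc.py client and a Malloc0 backing bdev as is typical in these nvmf tests (the bdev name is illustrative, not taken from this log):
  # NSID 1 already exists on cnode1, so the target should refuse this
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0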
00:12:52.210 [2024-11-20 06:23:12.299621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.299637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.312208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.312223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.325471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.325487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.338060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.338075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.350977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.350993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.364014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.364030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.377829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.377844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.390873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.390889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.404165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.404181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.416908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.416923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.430428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.430443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.443197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.443212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.455639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.455654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.468552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 [2024-11-20 06:23:12.468567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.210 [2024-11-20 06:23:12.482198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.210 
[2024-11-20 06:23:12.482213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.495225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.495240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.508494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.508509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.521194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.521209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.534328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.534344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.547674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.547689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.560732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.560747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.573949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.573963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.586637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.586652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.599934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.599949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.613455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.613471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.627058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.627073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.640543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.640558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.654010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.654025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.666941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.666956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.680453] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.680467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.693867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.693883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.706489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.706504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.719576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.719591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.732469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.732484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.470 [2024-11-20 06:23:12.745664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.470 [2024-11-20 06:23:12.745678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.731 [2024-11-20 06:23:12.759326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.759341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.772750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.772764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.785689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.785704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.799112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.799127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.812657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.812672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.826212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.826226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.838999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.839014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.851281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.851296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.863714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.863729] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.877174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.877189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.889758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.889773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.902354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.902369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.915860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.915875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.929303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.929318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.941938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.941953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.954588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.954604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.968148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.968168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.980597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.980613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:12.993997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:12.994013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.732 [2024-11-20 06:23:13.007651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.732 [2024-11-20 06:23:13.007666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.020472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.020487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.033786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.033801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.047359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.047374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.061017] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.061031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.073603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.073617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.087289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.087310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.100225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.100240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.113741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.113756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.127029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.127044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.140125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.140139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.153375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.153389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.166043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.166057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.179070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.179085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.192316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.192330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.205661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.205676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.219099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.219113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.232736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.232750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.245441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.245455] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.992 [2024-11-20 06:23:13.257955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.992 [2024-11-20 06:23:13.257969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.271306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.271321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.284011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.284026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 19028.00 IOPS, 148.66 MiB/s [2024-11-20T05:23:13.532Z] [2024-11-20 06:23:13.296748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.296762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.309756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.309771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.323424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.323439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.336089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.336107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.348883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.348897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.362049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.362064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.375313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.375328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.388426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.388440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.401536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.401550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.414122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.414137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.427066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.427081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 
06:23:13.440587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.440602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.454060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.454074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.467504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.467518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.480295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.480309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.492775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.492790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.506487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.506501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.253 [2024-11-20 06:23:13.519793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.253 [2024-11-20 06:23:13.519807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.533497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.533512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.546266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.546281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.559386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.559400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.572271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.572285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.584902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.584920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.597353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.597367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.609736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.609751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.622924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.622939] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.636519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.636534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.649815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.649829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.662992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.663006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.675856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.675870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.689361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.689376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.702581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.702596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.715179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.715194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.728523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.728538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.742039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.742054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.755306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.755320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.768260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.768275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.514 [2024-11-20 06:23:13.781109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.514 [2024-11-20 06:23:13.781123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.794482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.794496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.807882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.807896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.821002] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.821017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.833900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.833915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.846615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.846629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.859866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.859881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.873419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.873433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.885933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.885947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.899338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.899352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.912523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.912538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.925832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.925847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.938651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.938666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.951340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.951354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.963874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.963889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.976735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.976750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:13.989626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:13.989640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:14.003467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:14.003482] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:14.016911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:14.016927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:14.029996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:14.030011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.775 [2024-11-20 06:23:14.043040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.775 [2024-11-20 06:23:14.043055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.035 [2024-11-20 06:23:14.055797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.035 [2024-11-20 06:23:14.055812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.035 [2024-11-20 06:23:14.068327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.035 [2024-11-20 06:23:14.068342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.035 [2024-11-20 06:23:14.081318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.035 [2024-11-20 06:23:14.081333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.035 [2024-11-20 06:23:14.093935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.035 [2024-11-20 06:23:14.093950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.106428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.106443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.119275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.119289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.131955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.131969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.144796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.144812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.157146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.157165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.170297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.170312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.183708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.183722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.197164] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.197179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.210762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.210777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.223500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.223515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.235884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.235899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.248238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.248252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.261130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.261145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.274463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.274478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 [2024-11-20 06:23:14.288003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.288018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.036 19136.00 IOPS, 149.50 MiB/s [2024-11-20T05:23:14.315Z] [2024-11-20 06:23:14.301180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.036 [2024-11-20 06:23:14.301195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.314465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.314485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.327419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.327434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.339970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.339985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.352901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.352916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.366435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.366450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.379632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
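[editor note] The interleaved bdevperf progress lines (19028.00 IOPS at 148.66 MiB/s above, 19136.00 IOPS at 149.50 MiB/s here) are consistent with an 8 KiB I/O size, since 19136 IOPS x 8192 B = 19136/128 MiB/s = 149.50 MiB/s exactly. The 8 KiB block size is inferred from that ratio; it is not stated anywhere in this excerpt.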
00:12:54.297 [2024-11-20 06:23:14.379649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.393021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.393035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.406442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.406457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.419278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.419293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.432080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.432094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.444811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.444826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.458083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.458098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.470791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.470805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.483503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.483518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.495939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.495954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.508485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.508499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.520893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.520908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.534221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.534236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.547829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.547844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.297 [2024-11-20 06:23:14.561232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.297 [2024-11-20 06:23:14.561251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.574992] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.575006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.588342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.588357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.601885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.601901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.614577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.614592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.627966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.627981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.641117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.641132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.654262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.654277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.667068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.667083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.680079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.680094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.693259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.693273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.706287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.706302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.719747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.719762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.733370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.733385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.747128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.747142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.760430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.760444] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.773980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.773994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.787554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.787568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.800411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.558 [2024-11-20 06:23:14.800425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.558 [2024-11-20 06:23:14.813692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.559 [2024-11-20 06:23:14.813710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.559 [2024-11-20 06:23:14.826027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.559 [2024-11-20 06:23:14.826042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.838975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.838990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.851830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.851844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.865223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.865238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.877884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.877898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.890988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.891002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.904401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.904416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.916930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.916944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.929299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.929314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.942895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.819 [2024-11-20 06:23:14.942910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.819 [2024-11-20 06:23:14.956457] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.819 [2024-11-20 06:23:14.956471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.081 19201.67 IOPS, 150.01 MiB/s [2024-11-20T05:23:15.360Z]
00:12:56.125 19203.75 IOPS, 150.03 MiB/s [2024-11-20T05:23:16.404Z]
00:12:57.170 [2024-11-20 06:23:17.228502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:57.170 [2024-11-20 06:23:17.228516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
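The error pair above repeats in lockstep roughly every 13 ms from 06:23:14.956 through 06:23:17.389 while the random read/write workload keeps running at a steady ~19,200 IOPS: target/zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while the namespace is still attached, and the target rejects every attempt without disturbing in-flight I/O. A hypothetical loop of the same shape, assuming SPDK's scripts/rpc.py is on PATH (this is a sketch, not the zcopy.sh source; the subsystem and bdev names are taken from this log):

# Hypothetical sketch only -- not the actual zcopy.sh code. Every call
# fails with "Requested NSID 1 already in use" because NSID 1 never goes away.
for _ in $(seq 1 100); do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done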
00:12:57.170 [2024-11-20 06:23:17.295092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:57.170 [2024-11-20 06:23:17.295106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:57.170 19202.80 IOPS, 150.02 MiB/s
00:12:57.170 Latency(us)
00:12:57.170 [2024-11-20T05:23:17.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:57.170 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:57.170 Nvme1n1 : 5.00 19213.79 150.11 0.00 0.00 6657.40 3085.65 18131.63
00:12:57.170 [2024-11-20T05:23:17.449Z] ===================================================================================================================
00:12:57.170 [2024-11-20T05:23:17.449Z] Total : 19213.79 150.11 0.00 0.00 6657.40 3085.65 18131.63
00:12:57.170 [2024-11-20 06:23:17.304965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:57.170 [2024-11-20 06:23:17.304978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:57.170 [2024-11-20 06:23:17.377142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:57.170 [2024-11-20 06:23:17.377151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:57.170 [2024-11-20 06:23:17.389178] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.170 [2024-11-20 06:23:17.389188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.170 [2024-11-20 06:23:17.401208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.170 [2024-11-20 06:23:17.401216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2682611) - No such process 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2682611 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.170 delay0 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.170 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:57.431 [2024-11-20 06:23:17.568410] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:05.563 Initializing NVMe Controllers 00:13:05.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:05.563 Initialization complete. Launching workers. 
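The zcopy.sh xtrace just above swaps the namespace out: NSID 1 is removed, a delay bdev is layered over malloc0 with 1,000,000 µs average and p99 latencies on both reads and writes, and the delay bdev is re-added as NSID 1 so the abort example launched above has slow in-flight I/O to race against. The same sequence as direct rpc.py calls, a sketch assuming a running target and SPDK's scripts/rpc.py on PATH (commands and names are taken verbatim from this log):

# Sketch of the namespace swap traced above.
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# -r/-t/-w/-n: average read / p99 read / average write / p99 write latency, in microseconds
rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1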
00:13:05.563 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 221, failed: 40925 00:13:05.563 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 40997, failed to submit 149 00:13:05.563 success 40940, unsuccessful 57, failed 0 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.563 rmmod nvme_tcp 00:13:05.563 rmmod nvme_fabrics 00:13:05.563 rmmod nvme_keyring 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2680264 ']' 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2680264 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2680264 ']' 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2680264 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2680264 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2680264' 00:13:05.563 killing process with pid 2680264 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2680264 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2680264 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.563 06:23:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.563 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.564 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:06.947 00:13:06.947 real 0m34.367s 00:13:06.947 user 0m45.060s 00:13:06.947 sys 0m11.932s 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:06.947 ************************************ 00:13:06.947 END TEST nvmf_zcopy 00:13:06.947 ************************************ 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:06.947 ************************************ 00:13:06.947 START TEST nvmf_nmic 00:13:06.947 ************************************ 00:13:06.947 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:07.209 * Looking for test storage... 
00:13:07.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.209 --rc genhtml_branch_coverage=1 00:13:07.209 --rc genhtml_function_coverage=1 00:13:07.209 --rc genhtml_legend=1 00:13:07.209 --rc geninfo_all_blocks=1 00:13:07.209 --rc geninfo_unexecuted_blocks=1 00:13:07.209 00:13:07.209 ' 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.209 --rc genhtml_branch_coverage=1 00:13:07.209 --rc genhtml_function_coverage=1 00:13:07.209 --rc genhtml_legend=1 00:13:07.209 --rc geninfo_all_blocks=1 00:13:07.209 --rc geninfo_unexecuted_blocks=1 00:13:07.209 00:13:07.209 ' 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.209 --rc genhtml_branch_coverage=1 00:13:07.209 --rc genhtml_function_coverage=1 00:13:07.209 --rc genhtml_legend=1 00:13:07.209 --rc geninfo_all_blocks=1 00:13:07.209 --rc geninfo_unexecuted_blocks=1 00:13:07.209 00:13:07.209 ' 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.209 --rc genhtml_branch_coverage=1 00:13:07.209 --rc genhtml_function_coverage=1 00:13:07.209 --rc genhtml_legend=1 00:13:07.209 --rc geninfo_all_blocks=1 00:13:07.209 --rc geninfo_unexecuted_blocks=1 00:13:07.209 00:13:07.209 ' 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
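The xtrace above walks through scripts/common.sh's cmp_versions to decide whether the installed lcov (1.15) predates 2, which selects the older --rc lcov_branch_coverage/lcov_function_coverage option spelling for LCOV_OPTS. A condensed sketch of the same field-wise compare (not the actual scripts/common.sh code, which also supports other comparison operators):

# Condensed sketch: split on '.', '-' or ':' and compare fields numerically.
lt() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "old lcov: use the --rc lcov_* option spelling"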
00:13:07.209 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.222 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:07.223 
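The nvmf/common.sh trace above seeds the test environment: ports 4420-4422, a freshly generated host NQN, and the host ID carried inside it. One way to derive that pair and consume it, assuming nvme-cli is installed (the connect line is purely illustrative, with the target address taken from this log; the ID extraction is an assumption, since the trace only shows the already-expanded values):

# Sketch (requires nvme-cli). gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}   # assumption: keep only the trailing uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# Illustrative connect against the target used elsewhere in this log:
nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1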
06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.223 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.363 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:15.364 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:15.364 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.364 06:23:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:15.364 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:15.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:13:15.364 00:13:15.364 --- 10.0.0.2 ping statistics --- 00:13:15.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.364 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:13:15.364 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
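The bring-up just logged has two stages: the discovery loop maps each matched PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/, and the plumbing then splits the two ports across network namespaces so a single host can act as both target and initiator. A condensed sketch of those steps, using the interface names and addresses this particular run discovered (the cvl_0_* names, PCI addresses, and 10.0.0.0/24 subnet will differ on other hosts):

    # map PCI functions to net devices, as the harness's sysfs glob does
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "$pci -> ${dev##*/}"
        done
    done

    # target port goes into a private namespace; initiator stays in the root one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic in, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1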
00:13:15.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:13:15.364 00:13:15.364 --- 10.0.0.1 ping statistics --- 00:13:15.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.365 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2689380 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2689380 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2689380 ']' 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.365 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.365 [2024-11-20 06:23:34.896507] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
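nvmfappstart then launches the target application inside that namespace and blocks until its RPC socket answers. A minimal manual equivalent, assuming this workspace's paths (the polling loop is a stand-in for the harness's waitforlisten helper, which additionally checks that the pid stays alive):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait for the default RPC domain socket to appear
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done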
00:13:15.365 [2024-11-20 06:23:34.896570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.365 [2024-11-20 06:23:34.998119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.365 [2024-11-20 06:23:35.053266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.365 [2024-11-20 06:23:35.053321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.365 [2024-11-20 06:23:35.053329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.365 [2024-11-20 06:23:35.053337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.365 [2024-11-20 06:23:35.053343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.365 [2024-11-20 06:23:35.055776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.365 [2024-11-20 06:23:35.055935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.365 [2024-11-20 06:23:35.056097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.365 [2024-11-20 06:23:35.056097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 [2024-11-20 06:23:35.774142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 Malloc0 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 [2024-11-20 06:23:35.851883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:15.627 test case1: single bdev can't be used in multiple subsystems 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 [2024-11-20 06:23:35.887653] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:15.627 [2024-11-20 06:23:35.887680] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:15.627 [2024-11-20 06:23:35.887689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.627 request: 00:13:15.627 { 00:13:15.627 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:15.627 "namespace": { 00:13:15.627 "bdev_name": "Malloc0", 00:13:15.627 "no_auto_visible": false 
00:13:15.627 }, 00:13:15.627 "method": "nvmf_subsystem_add_ns", 00:13:15.627 "req_id": 1 00:13:15.627 } 00:13:15.627 Got JSON-RPC error response 00:13:15.627 response: 00:13:15.627 { 00:13:15.627 "code": -32602, 00:13:15.627 "message": "Invalid parameters" 00:13:15.627 } 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:15.627 Adding namespace failed - expected result. 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:15.627 test case2: host connect to nvmf target in multiple paths 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.627 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 [2024-11-20 06:23:35.899860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:15.888 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.888 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.275 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:19.188 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.188 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:13:19.188 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.188 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:19.188 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:13:21.105 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:21.105 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:21.105 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.105 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:21.105 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.105 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:13:21.105 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
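Test case 1 asserts that a bdev claimed by one subsystem cannot be attached to a second: the second nvmf_subsystem_add_ns is rejected with -32602 because Malloc0 is already claimed exclusive_write, which is exactly the failure the test wants. Test case 2 then adds a second listener on port 4421 and connects the host over both paths. A condensed sketch of the sequence with this tree's rpc.py, the same NQNs, and the NVME_HOST array set up earlier in common.sh:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # expected to fail: Malloc0 is already claimed by cnode1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo 'Adding namespace failed - expected result.'

    # second path for cnode1, then one connect per path
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421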
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:21.105 [global] 00:13:21.105 thread=1 00:13:21.105 invalidate=1 00:13:21.105 rw=write 00:13:21.105 time_based=1 00:13:21.105 runtime=1 00:13:21.105 ioengine=libaio 00:13:21.105 direct=1 00:13:21.105 bs=4096 00:13:21.105 iodepth=1 00:13:21.105 norandommap=0 00:13:21.105 numjobs=1 00:13:21.105 00:13:21.105 verify_dump=1 00:13:21.105 verify_backlog=512 00:13:21.105 verify_state_save=0 00:13:21.105 do_verify=1 00:13:21.105 verify=crc32c-intel 00:13:21.105 [job0] 00:13:21.105 filename=/dev/nvme0n1 00:13:21.105 Could not set queue depth (nvme0n1) 00:13:21.105 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.105 fio-3.35 00:13:21.105 Starting 1 thread 00:13:22.491 00:13:22.492 job0: (groupid=0, jobs=1): err= 0: pid=2690863: Wed Nov 20 06:23:42 2024 00:13:22.492 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:22.492 slat (nsec): min=24640, max=59491, avg=25784.36, stdev=2983.17 00:13:22.492 clat (usec): min=766, max=1270, avg=989.40, stdev=63.96 00:13:22.492 lat (usec): min=792, max=1295, avg=1015.18, stdev=63.79 00:13:22.492 clat percentiles (usec): 00:13:22.492 | 1.00th=[ 799], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 947], 00:13:22.492 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:13:22.492 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1074], 00:13:22.492 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1270], 99.95th=[ 1270], 00:13:22.492 | 99.99th=[ 1270] 00:13:22.492 write: IOPS=691, BW=2765KiB/s (2832kB/s)(2768KiB/1001msec); 0 zone resets 00:13:22.492 slat (usec): min=10, max=27868, avg=73.11, stdev=1058.19 00:13:22.492 clat (usec): min=229, max=870, avg=606.82, stdev=115.63 00:13:22.492 lat (usec): min=240, max=28634, avg=679.93, stdev=1070.63 00:13:22.492 clat percentiles (usec): 00:13:22.492 | 1.00th=[ 338], 5.00th=[ 424], 10.00th=[ 449], 20.00th=[ 494], 00:13:22.492 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 652], 00:13:22.492 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 766], 00:13:22.492 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 873], 99.95th=[ 873], 00:13:22.492 | 99.99th=[ 873] 00:13:22.492 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:22.492 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:22.492 lat (usec) : 250=0.08%, 500=11.96%, 750=40.37%, 1000=27.99% 00:13:22.492 lat (msec) : 2=19.60% 00:13:22.492 cpu : usr=1.80%, sys=3.80%, ctx=1206, majf=0, minf=1 00:13:22.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.492 issued rwts: total=512,692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.492 00:13:22.492 Run status group 0 (all jobs): 00:13:22.492 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:13:22.492 WRITE: bw=2765KiB/s (2832kB/s), 2765KiB/s-2765KiB/s (2832kB/s-2832kB/s), io=2768KiB (2834kB), run=1001-1001msec 00:13:22.492 00:13:22.492 Disk stats (read/write): 00:13:22.492 nvme0n1: ios=537/537, merge=0/0, ticks=1449/320, in_queue=1769, util=98.80% 00:13:22.492 06:23:42 
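The fio-wrapper invocation above expands to the job just printed: one thread doing 4 KiB libaio writes at queue depth 1 for one second, with crc32c-intel data verification, against the multipath-connected namespace. The same job as a plain fio command line (the /dev/nvme0n1 name depends on how the two connected controllers enumerate on a given host):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --numjobs=1 \
        --rw=write --bs=4096 --iodepth=1 --norandommap=0 \
        --time_based=1 --runtime=1 --invalidate=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0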
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:22.492 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:22.492 rmmod nvme_tcp 00:13:22.492 rmmod nvme_fabrics 00:13:22.492 rmmod nvme_keyring 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2689380 ']' 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2689380 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2689380 ']' 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2689380 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2689380 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2689380' 00:13:22.753 killing process with pid 2689380 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2689380 00:13:22.753 06:23:42 
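Teardown is the mirror image: a single disconnect by subsystem NQN drops both paths at once (hence "disconnected 2 controller(s)"), the host-side modules are unloaded, the test's ACCEPT rule is stripped back out, and the target is killed by the pid recorded at startup. A sketch using this run's pid and namespace name:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # drops the 4420 and 4421 paths
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 2689380                                     # nvmfpid from nvmfappstart
    # drop only the SPDK_NVMF-tagged iptables rules, as the iptr helper does
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk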
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2689380 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.753 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:25.300 00:13:25.300 real 0m17.920s 00:13:25.300 user 0m48.380s 00:13:25.300 sys 0m6.578s 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.300 ************************************ 00:13:25.300 END TEST nvmf_nmic 00:13:25.300 ************************************ 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:25.300 ************************************ 00:13:25.300 START TEST nvmf_fio_target 00:13:25.300 ************************************ 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:25.300 * Looking for test storage... 
00:13:25.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:25.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.300 --rc genhtml_branch_coverage=1 00:13:25.300 --rc genhtml_function_coverage=1 00:13:25.300 --rc genhtml_legend=1 00:13:25.300 --rc geninfo_all_blocks=1 00:13:25.300 --rc geninfo_unexecuted_blocks=1 00:13:25.300 00:13:25.300 ' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:25.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.300 --rc genhtml_branch_coverage=1 00:13:25.300 --rc genhtml_function_coverage=1 00:13:25.300 --rc genhtml_legend=1 00:13:25.300 --rc geninfo_all_blocks=1 00:13:25.300 --rc geninfo_unexecuted_blocks=1 00:13:25.300 00:13:25.300 ' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:25.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.300 --rc genhtml_branch_coverage=1 00:13:25.300 --rc genhtml_function_coverage=1 00:13:25.300 --rc genhtml_legend=1 00:13:25.300 --rc geninfo_all_blocks=1 00:13:25.300 --rc geninfo_unexecuted_blocks=1 00:13:25.300 00:13:25.300 ' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:25.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.300 --rc genhtml_branch_coverage=1 00:13:25.300 --rc genhtml_function_coverage=1 00:13:25.300 --rc genhtml_legend=1 00:13:25.300 --rc geninfo_all_blocks=1 00:13:25.300 --rc geninfo_unexecuted_blocks=1 00:13:25.300 00:13:25.300 ' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.300 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:25.301 06:23:45 
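Note the recurring complaint from nvmf/common.sh line 33, reproduced again just above: '[' '' -eq 1 ']' feeds an empty string to a numeric test, so every test that sources common.sh logs "integer expression expected". It is harmless here (test returns an error status and the branch is simply not taken), but it could be silenced by defaulting the operand. A sketch of the guarded form; the variable name is illustrative, since the log does not show which variable is empty:

    # guard a numeric test against an unset/empty value
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
        : # guarded branch runs only when the flag really is 1
    fi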
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:25.301 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.442 06:23:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:33.442 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:33.442 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.442 06:23:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:33.442 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:33.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:33.442 06:23:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:33.442 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:33.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:13:33.443 00:13:33.443 --- 10.0.0.2 ping statistics --- 00:13:33.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.443 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:13:33.443 00:13:33.443 --- 10.0.0.1 ping statistics --- 00:13:33.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.443 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2695533 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2695533 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2695533 ']' 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:33.443 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.443 [2024-11-20 06:23:52.967228] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
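The nvmf_tcp_init sequence traced above (@250-@291) splits the two E810 ports into initiator and target roles: the target port moves into a private network namespace, both sides get 10.0.0.0/24 addresses, the NVMe/TCP port is opened in iptables, and reachability is verified with one ping in each direction before the target application is launched inside the namespace. Condensed to its commands (interface names, addresses and flags copied from this run; binary paths shortened, and the -m comment tagging done by the ipts wrapper omitted):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # then wait for /var/tmp/spdk.sock
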
00:13:33.443 [2024-11-20 06:23:52.967290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.443 [2024-11-20 06:23:53.068053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.443 [2024-11-20 06:23:53.121183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.443 [2024-11-20 06:23:53.121237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.443 [2024-11-20 06:23:53.121246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.443 [2024-11-20 06:23:53.121253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.443 [2024-11-20 06:23:53.121260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.443 [2024-11-20 06:23:53.123256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.443 [2024-11-20 06:23:53.123420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.443 [2024-11-20 06:23:53.123584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.443 [2024-11-20 06:23:53.123584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.705 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:33.705 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:13:33.705 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.705 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.705 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.705 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.705 06:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:33.965 [2024-11-20 06:23:54.006300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.965 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.226 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:34.226 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.226 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:34.226 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.489 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:34.489 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.751 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:34.751 06:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:35.012 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:35.273 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:35.273 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:35.273 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:35.273 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:35.534 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:35.534 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:35.795 06:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.795 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:35.795 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.055 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:36.055 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.315 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.315 [2024-11-20 06:23:56.575394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.576 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:36.576 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:36.836 06:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.216 06:23:58 
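Stripped of the fio.sh variable plumbing, the target-side configuration just traced reduces to the rpc.py sequence below (rpc.py abbreviates the full scripts/rpc.py path; the NQN, serial, bdev names and 10.0.0.2:4420 are the values from this run, and the host NQN/ID flags passed to nvme connect are dropped for brevity):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do
    rpc.py bdev_malloc_create 64 512                   # auto-named Malloc0 .. Malloc6 (64 MiB, 512 B blocks)
  done
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # host side; yields nvme0n1..n4

The waitforserial helper traced next then simply polls until all four namespaces surface as block devices, roughly:

  while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 4 ]; do sleep 2; done
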
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:38.216 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:13:38.216 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.216 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:13:38.217 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:13:38.217 06:23:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:13:40.761 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:40.761 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:40.761 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.761 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:13:40.761 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.761 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:13:40.761 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:40.761 [global] 00:13:40.761 thread=1 00:13:40.761 invalidate=1 00:13:40.761 rw=write 00:13:40.761 time_based=1 00:13:40.761 runtime=1 00:13:40.761 ioengine=libaio 00:13:40.761 direct=1 00:13:40.761 bs=4096 00:13:40.761 iodepth=1 00:13:40.761 norandommap=0 00:13:40.761 numjobs=1 00:13:40.761 00:13:40.761 verify_dump=1 00:13:40.761 verify_backlog=512 00:13:40.761 verify_state_save=0 00:13:40.761 do_verify=1 00:13:40.761 verify=crc32c-intel 00:13:40.761 [job0] 00:13:40.761 filename=/dev/nvme0n1 00:13:40.761 [job1] 00:13:40.761 filename=/dev/nvme0n2 00:13:40.761 [job2] 00:13:40.761 filename=/dev/nvme0n3 00:13:40.761 [job3] 00:13:40.761 filename=/dev/nvme0n4 00:13:40.761 Could not set queue depth (nvme0n1) 00:13:40.761 Could not set queue depth (nvme0n2) 00:13:40.761 Could not set queue depth (nvme0n3) 00:13:40.761 Could not set queue depth (nvme0n4) 00:13:40.761 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.761 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.761 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.761 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.761 fio-3.35 00:13:40.761 Starting 4 threads 00:13:42.148 00:13:42.148 job0: (groupid=0, jobs=1): err= 0: pid=2697273: Wed Nov 20 06:24:02 2024 00:13:42.148 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:42.148 slat (nsec): min=7122, max=45834, avg=25576.59, stdev=4823.78 00:13:42.148 clat (usec): min=399, max=41017, avg=893.46, stdev=1782.34 00:13:42.148 lat (usec): min=425, max=41044, avg=919.04, stdev=1782.42 00:13:42.148 clat percentiles (usec): 00:13:42.148 | 1.00th=[ 502], 5.00th=[ 562], 10.00th=[ 611], 20.00th=[ 693], 
00:13:42.148 | 30.00th=[ 734], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 873], 00:13:42.148 | 70.00th=[ 922], 80.00th=[ 955], 90.00th=[ 988], 95.00th=[ 1012], 00:13:42.148 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[41157], 99.95th=[41157], 00:13:42.148 | 99.99th=[41157] 00:13:42.148 write: IOPS=926, BW=3704KiB/s (3793kB/s)(3708KiB/1001msec); 0 zone resets 00:13:42.148 slat (nsec): min=10038, max=54853, avg=31486.83, stdev=9371.10 00:13:42.148 clat (usec): min=148, max=985, avg=528.20, stdev=128.56 00:13:42.148 lat (usec): min=162, max=1020, avg=559.69, stdev=131.74 00:13:42.148 clat percentiles (usec): 00:13:42.148 | 1.00th=[ 245], 5.00th=[ 334], 10.00th=[ 363], 20.00th=[ 412], 00:13:42.148 | 30.00th=[ 461], 40.00th=[ 494], 50.00th=[ 529], 60.00th=[ 562], 00:13:42.148 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 742], 00:13:42.148 | 99.00th=[ 840], 99.50th=[ 881], 99.90th=[ 988], 99.95th=[ 988], 00:13:42.148 | 99.99th=[ 988] 00:13:42.148 bw ( KiB/s): min= 4096, max= 4096, per=32.48%, avg=4096.00, stdev= 0.00, samples=1 00:13:42.148 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:42.148 lat (usec) : 250=1.04%, 500=26.48%, 750=45.80%, 1000=23.91% 00:13:42.148 lat (msec) : 2=2.71%, 50=0.07% 00:13:42.148 cpu : usr=2.00%, sys=4.50%, ctx=1441, majf=0, minf=1 00:13:42.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.148 issued rwts: total=512,927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.148 job1: (groupid=0, jobs=1): err= 0: pid=2697294: Wed Nov 20 06:24:02 2024 00:13:42.149 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:42.149 slat (nsec): min=8576, max=44252, avg=25347.76, stdev=3446.55 00:13:42.149 clat (usec): min=793, max=1561, avg=1120.89, stdev=126.76 00:13:42.149 lat (usec): min=818, max=1586, avg=1146.24, stdev=126.75 00:13:42.149 clat percentiles (usec): 00:13:42.149 | 1.00th=[ 832], 5.00th=[ 906], 10.00th=[ 947], 20.00th=[ 1012], 00:13:42.149 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:13:42.149 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1287], 95.00th=[ 1336], 00:13:42.149 | 99.00th=[ 1418], 99.50th=[ 1467], 99.90th=[ 1565], 99.95th=[ 1565], 00:13:42.149 | 99.99th=[ 1565] 00:13:42.149 write: IOPS=745, BW=2981KiB/s (3053kB/s)(2984KiB/1001msec); 0 zone resets 00:13:42.149 slat (nsec): min=9656, max=57239, avg=29957.27, stdev=8749.33 00:13:42.149 clat (usec): min=149, max=926, avg=511.11, stdev=135.22 00:13:42.149 lat (usec): min=181, max=959, avg=541.07, stdev=137.29 00:13:42.149 clat percentiles (usec): 00:13:42.149 | 1.00th=[ 260], 5.00th=[ 318], 10.00th=[ 347], 20.00th=[ 396], 00:13:42.149 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 498], 60.00th=[ 537], 00:13:42.149 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 701], 95.00th=[ 766], 00:13:42.149 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 930], 99.95th=[ 930], 00:13:42.149 | 99.99th=[ 930] 00:13:42.149 bw ( KiB/s): min= 4096, max= 4096, per=32.48%, avg=4096.00, stdev= 0.00, samples=1 00:13:42.149 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:42.149 lat (usec) : 250=0.40%, 500=29.49%, 750=25.76%, 1000=11.29% 00:13:42.149 lat (msec) : 2=33.07% 00:13:42.149 cpu : usr=1.40%, sys=4.10%, ctx=1259, majf=0, minf=2 00:13:42.149 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.149 issued rwts: total=512,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.149 job2: (groupid=0, jobs=1): err= 0: pid=2697314: Wed Nov 20 06:24:02 2024 00:13:42.149 read: IOPS=731, BW=2925KiB/s (2995kB/s)(2928KiB/1001msec) 00:13:42.149 slat (nsec): min=7463, max=63783, avg=25879.93, stdev=7694.83 00:13:42.149 clat (usec): min=148, max=1033, avg=679.32, stdev=122.64 00:13:42.149 lat (usec): min=175, max=1061, avg=705.20, stdev=123.20 00:13:42.149 clat percentiles (usec): 00:13:42.149 | 1.00th=[ 375], 5.00th=[ 474], 10.00th=[ 519], 20.00th=[ 578], 00:13:42.149 | 30.00th=[ 611], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 725], 00:13:42.149 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 832], 95.00th=[ 865], 00:13:42.149 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1037], 99.95th=[ 1037], 00:13:42.149 | 99.99th=[ 1037] 00:13:42.149 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:42.149 slat (nsec): min=9843, max=64780, avg=30898.31, stdev=11266.51 00:13:42.149 clat (usec): min=93, max=928, avg=428.95, stdev=153.70 00:13:42.149 lat (usec): min=104, max=965, avg=459.84, stdev=157.64 00:13:42.149 clat percentiles (usec): 00:13:42.149 | 1.00th=[ 112], 5.00th=[ 155], 10.00th=[ 269], 20.00th=[ 314], 00:13:42.149 | 30.00th=[ 351], 40.00th=[ 379], 50.00th=[ 408], 60.00th=[ 449], 00:13:42.149 | 70.00th=[ 490], 80.00th=[ 553], 90.00th=[ 644], 95.00th=[ 725], 00:13:42.149 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 898], 99.95th=[ 930], 00:13:42.149 | 99.99th=[ 930] 00:13:42.149 bw ( KiB/s): min= 4096, max= 4096, per=32.48%, avg=4096.00, stdev= 0.00, samples=1 00:13:42.149 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:42.149 lat (usec) : 100=0.28%, 250=4.90%, 500=39.86%, 750=40.15%, 1000=14.69% 00:13:42.149 lat (msec) : 2=0.11% 00:13:42.149 cpu : usr=2.20%, sys=5.90%, ctx=1760, majf=0, minf=1 00:13:42.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.149 issued rwts: total=732,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.149 job3: (groupid=0, jobs=1): err= 0: pid=2697321: Wed Nov 20 06:24:02 2024 00:13:42.149 read: IOPS=19, BW=78.6KiB/s (80.5kB/s)(80.0KiB/1018msec) 00:13:42.149 slat (nsec): min=10481, max=41975, avg=28567.30, stdev=5185.67 00:13:42.149 clat (usec): min=1219, max=42001, avg=39756.19, stdev=9077.55 00:13:42.149 lat (usec): min=1230, max=42029, avg=39784.76, stdev=9081.78 00:13:42.149 clat percentiles (usec): 00:13:42.149 | 1.00th=[ 1221], 5.00th=[ 1221], 10.00th=[41157], 20.00th=[41157], 00:13:42.149 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:42.149 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:42.149 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:42.149 | 99.99th=[42206] 00:13:42.149 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:13:42.149 slat (usec): min=10, max=1427, avg=28.47, stdev=63.57 00:13:42.149 clat (usec): min=86, max=814, 
avg=399.58, stdev=157.75 00:13:42.149 lat (usec): min=97, max=1922, avg=428.05, stdev=179.17 00:13:42.149 clat percentiles (usec): 00:13:42.149 | 1.00th=[ 104], 5.00th=[ 126], 10.00th=[ 206], 20.00th=[ 277], 00:13:42.149 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 383], 60.00th=[ 449], 00:13:42.149 | 70.00th=[ 482], 80.00th=[ 537], 90.00th=[ 611], 95.00th=[ 693], 00:13:42.149 | 99.00th=[ 750], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:13:42.149 | 99.99th=[ 816] 00:13:42.149 bw ( KiB/s): min= 4096, max= 4096, per=32.48%, avg=4096.00, stdev= 0.00, samples=1 00:13:42.149 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:42.149 lat (usec) : 100=0.94%, 250=15.04%, 500=55.26%, 750=23.87%, 1000=1.13% 00:13:42.149 lat (msec) : 2=0.19%, 50=3.57% 00:13:42.149 cpu : usr=0.39%, sys=1.57%, ctx=534, majf=0, minf=1 00:13:42.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.149 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.149 00:13:42.149 Run status group 0 (all jobs): 00:13:42.149 READ: bw=6978KiB/s (7146kB/s), 78.6KiB/s-2925KiB/s (80.5kB/s-2995kB/s), io=7104KiB (7274kB), run=1001-1018msec 00:13:42.149 WRITE: bw=12.3MiB/s (12.9MB/s), 2012KiB/s-4092KiB/s (2060kB/s-4190kB/s), io=12.5MiB (13.1MB), run=1001-1018msec 00:13:42.149 00:13:42.149 Disk stats (read/write): 00:13:42.149 nvme0n1: ios=564/611, merge=0/0, ticks=802/308, in_queue=1110, util=96.39% 00:13:42.149 nvme0n2: ios=523/512, merge=0/0, ticks=578/228, in_queue=806, util=87.44% 00:13:42.149 nvme0n3: ios=570/980, merge=0/0, ticks=797/382, in_queue=1179, util=97.35% 00:13:42.149 nvme0n4: ios=72/512, merge=0/0, ticks=747/189, in_queue=936, util=97.42% 00:13:42.149 06:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:42.149 [global] 00:13:42.149 thread=1 00:13:42.149 invalidate=1 00:13:42.149 rw=randwrite 00:13:42.149 time_based=1 00:13:42.149 runtime=1 00:13:42.149 ioengine=libaio 00:13:42.149 direct=1 00:13:42.149 bs=4096 00:13:42.149 iodepth=1 00:13:42.149 norandommap=0 00:13:42.149 numjobs=1 00:13:42.149 00:13:42.149 verify_dump=1 00:13:42.149 verify_backlog=512 00:13:42.149 verify_state_save=0 00:13:42.149 do_verify=1 00:13:42.149 verify=crc32c-intel 00:13:42.149 [job0] 00:13:42.149 filename=/dev/nvme0n1 00:13:42.149 [job1] 00:13:42.149 filename=/dev/nvme0n2 00:13:42.149 [job2] 00:13:42.149 filename=/dev/nvme0n3 00:13:42.149 [job3] 00:13:42.149 filename=/dev/nvme0n4 00:13:42.149 Could not set queue depth (nvme0n1) 00:13:42.149 Could not set queue depth (nvme0n2) 00:13:42.149 Could not set queue depth (nvme0n3) 00:13:42.149 Could not set queue depth (nvme0n4) 00:13:42.411 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.411 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.411 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.411 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.411 fio-3.35 00:13:42.411 Starting 4 
threads 00:13:43.799 00:13:43.799 job0: (groupid=0, jobs=1): err= 0: pid=2697854: Wed Nov 20 06:24:03 2024 00:13:43.799 read: IOPS=16, BW=67.6KiB/s (69.2kB/s)(68.0KiB/1006msec) 00:13:43.799 slat (nsec): min=7583, max=27530, avg=25063.59, stdev=6209.50 00:13:43.799 clat (usec): min=808, max=42022, avg=39234.91, stdev=9911.98 00:13:43.799 lat (usec): min=818, max=42050, avg=39259.98, stdev=9915.95 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 807], 5.00th=[ 807], 10.00th=[41157], 20.00th=[41157], 00:13:43.799 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:13:43.799 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:43.799 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.799 | 99.99th=[42206] 00:13:43.799 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:13:43.799 slat (nsec): min=9284, max=55636, avg=30708.84, stdev=9970.47 00:13:43.799 clat (usec): min=288, max=909, avg=620.20, stdev=115.24 00:13:43.799 lat (usec): min=303, max=963, avg=650.91, stdev=119.78 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 347], 5.00th=[ 404], 10.00th=[ 469], 20.00th=[ 515], 00:13:43.799 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:13:43.799 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 783], 00:13:43.799 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 914], 99.95th=[ 914], 00:13:43.799 | 99.99th=[ 914] 00:13:43.799 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.799 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.799 lat (usec) : 500=17.20%, 750=68.24%, 1000=11.53% 00:13:43.799 lat (msec) : 50=3.02% 00:13:43.799 cpu : usr=0.80%, sys=2.29%, ctx=532, majf=0, minf=1 00:13:43.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.799 job1: (groupid=0, jobs=1): err= 0: pid=2697864: Wed Nov 20 06:24:03 2024 00:13:43.799 read: IOPS=17, BW=70.8KiB/s (72.5kB/s)(72.0KiB/1017msec) 00:13:43.799 slat (nsec): min=26770, max=27436, avg=27173.17, stdev=171.38 00:13:43.799 clat (usec): min=1086, max=42090, avg=37324.46, stdev=13182.66 00:13:43.799 lat (usec): min=1114, max=42117, avg=37351.63, stdev=13182.66 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[ 1106], 20.00th=[41157], 00:13:43.799 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:13:43.799 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:43.799 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.799 | 99.99th=[42206] 00:13:43.799 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:13:43.799 slat (nsec): min=9208, max=54523, avg=30212.03, stdev=9901.18 00:13:43.799 clat (usec): min=248, max=914, avg=631.92, stdev=115.61 00:13:43.799 lat (usec): min=258, max=947, avg=662.13, stdev=120.85 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 330], 5.00th=[ 404], 10.00th=[ 474], 20.00th=[ 545], 00:13:43.799 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:13:43.799 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 
95.00th=[ 791], 00:13:43.799 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 914], 99.95th=[ 914], 00:13:43.799 | 99.99th=[ 914] 00:13:43.799 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.799 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.799 lat (usec) : 250=0.19%, 500=13.40%, 750=70.00%, 1000=13.02% 00:13:43.799 lat (msec) : 2=0.38%, 50=3.02% 00:13:43.799 cpu : usr=1.87%, sys=1.18%, ctx=532, majf=0, minf=1 00:13:43.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.799 job2: (groupid=0, jobs=1): err= 0: pid=2697883: Wed Nov 20 06:24:03 2024 00:13:43.799 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:13:43.799 slat (nsec): min=26267, max=27795, avg=26899.44, stdev=384.72 00:13:43.799 clat (usec): min=1099, max=42054, avg=39342.33, stdev=9555.87 00:13:43.799 lat (usec): min=1125, max=42081, avg=39369.23, stdev=9556.01 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41157], 00:13:43.799 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:13:43.799 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:43.799 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:43.799 | 99.99th=[42206] 00:13:43.799 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:13:43.799 slat (nsec): min=9925, max=68018, avg=30994.90, stdev=8995.91 00:13:43.799 clat (usec): min=271, max=908, avg=596.83, stdev=117.80 00:13:43.799 lat (usec): min=282, max=945, avg=627.82, stdev=120.92 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 314], 5.00th=[ 392], 10.00th=[ 449], 20.00th=[ 502], 00:13:43.799 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:13:43.799 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 783], 00:13:43.799 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 906], 99.95th=[ 906], 00:13:43.799 | 99.99th=[ 906] 00:13:43.799 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.799 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.799 lat (usec) : 500=18.68%, 750=69.25%, 1000=8.68% 00:13:43.799 lat (msec) : 2=0.19%, 50=3.21% 00:13:43.799 cpu : usr=1.16%, sys=1.16%, ctx=531, majf=0, minf=1 00:13:43.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.799 job3: (groupid=0, jobs=1): err= 0: pid=2697890: Wed Nov 20 06:24:03 2024 00:13:43.799 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:43.799 slat (nsec): min=24282, max=31479, avg=25418.47, stdev=403.73 00:13:43.799 clat (usec): min=729, max=1114, avg=966.22, stdev=51.80 00:13:43.799 lat (usec): min=754, max=1139, avg=991.64, stdev=51.71 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 807], 5.00th=[ 865], 10.00th=[ 
889], 20.00th=[ 938], 00:13:43.799 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:13:43.799 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1045], 00:13:43.799 | 99.00th=[ 1090], 99.50th=[ 1090], 99.90th=[ 1123], 99.95th=[ 1123], 00:13:43.799 | 99.99th=[ 1123] 00:13:43.799 write: IOPS=772, BW=3089KiB/s (3163kB/s)(3092KiB/1001msec); 0 zone resets 00:13:43.799 slat (nsec): min=9577, max=61346, avg=28819.26, stdev=8629.65 00:13:43.799 clat (usec): min=239, max=1608, avg=595.68, stdev=123.27 00:13:43.799 lat (usec): min=258, max=1641, avg=624.50, stdev=125.97 00:13:43.799 clat percentiles (usec): 00:13:43.799 | 1.00th=[ 334], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 490], 00:13:43.799 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:13:43.799 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:13:43.799 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 1614], 99.95th=[ 1614], 00:13:43.799 | 99.99th=[ 1614] 00:13:43.799 bw ( KiB/s): min= 4096, max= 4096, per=45.94%, avg=4096.00, stdev= 0.00, samples=1 00:13:43.799 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:43.799 lat (usec) : 250=0.16%, 500=13.07%, 750=42.10%, 1000=36.19% 00:13:43.799 lat (msec) : 2=8.48% 00:13:43.799 cpu : usr=1.70%, sys=3.90%, ctx=1285, majf=0, minf=2 00:13:43.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.799 issued rwts: total=512,773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.799 00:13:43.799 Run status group 0 (all jobs): 00:13:43.799 READ: bw=2181KiB/s (2234kB/s), 67.6KiB/s-2046KiB/s (69.2kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1036msec 00:13:43.799 WRITE: bw=8915KiB/s (9129kB/s), 1977KiB/s-3089KiB/s (2024kB/s-3163kB/s), io=9236KiB (9458kB), run=1001-1036msec 00:13:43.799 00:13:43.799 Disk stats (read/write): 00:13:43.799 nvme0n1: ios=34/512, merge=0/0, ticks=1301/247, in_queue=1548, util=84.07% 00:13:43.799 nvme0n2: ios=55/512, merge=0/0, ticks=560/245, in_queue=805, util=91.33% 00:13:43.799 nvme0n3: ios=35/512, merge=0/0, ticks=1378/294, in_queue=1672, util=92.07% 00:13:43.799 nvme0n4: ios=561/512, merge=0/0, ticks=623/294, in_queue=917, util=97.54% 00:13:43.799 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:43.799 [global] 00:13:43.799 thread=1 00:13:43.799 invalidate=1 00:13:43.799 rw=write 00:13:43.799 time_based=1 00:13:43.799 runtime=1 00:13:43.799 ioengine=libaio 00:13:43.799 direct=1 00:13:43.799 bs=4096 00:13:43.799 iodepth=128 00:13:43.799 norandommap=0 00:13:43.799 numjobs=1 00:13:43.799 00:13:43.799 verify_dump=1 00:13:43.799 verify_backlog=512 00:13:43.800 verify_state_save=0 00:13:43.800 do_verify=1 00:13:43.800 verify=crc32c-intel 00:13:43.800 [job0] 00:13:43.800 filename=/dev/nvme0n1 00:13:43.800 [job1] 00:13:43.800 filename=/dev/nvme0n2 00:13:43.800 [job2] 00:13:43.800 filename=/dev/nvme0n3 00:13:43.800 [job3] 00:13:43.800 filename=/dev/nvme0n4 00:13:43.800 Could not set queue depth (nvme0n1) 00:13:43.800 Could not set queue depth (nvme0n2) 00:13:43.800 Could not set queue depth (nvme0n3) 00:13:43.800 Could not set queue depth (nvme0n4) 00:13:44.061 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:44.061 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:44.061 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:44.061 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:44.061 fio-3.35 00:13:44.061 Starting 4 threads 00:13:45.448 00:13:45.448 job0: (groupid=0, jobs=1): err= 0: pid=2698333: Wed Nov 20 06:24:05 2024 00:13:45.448 read: IOPS=6484, BW=25.3MiB/s (26.6MB/s)(25.5MiB/1005msec) 00:13:45.448 slat (nsec): min=1012, max=12913k, avg=79928.60, stdev=606315.65 00:13:45.448 clat (usec): min=3374, max=29893, avg=10093.15, stdev=3758.59 00:13:45.448 lat (usec): min=3383, max=31040, avg=10173.08, stdev=3800.55 00:13:45.448 clat percentiles (usec): 00:13:45.448 | 1.00th=[ 4080], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7701], 00:13:45.448 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 9110], 00:13:45.448 | 70.00th=[10552], 80.00th=[12780], 90.00th=[15795], 95.00th=[17433], 00:13:45.448 | 99.00th=[23987], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:13:45.449 | 99.99th=[30016] 00:13:45.449 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:13:45.449 slat (nsec): min=1689, max=15473k, avg=66851.79, stdev=422355.60 00:13:45.449 clat (usec): min=1902, max=48598, avg=9246.96, stdev=5561.09 00:13:45.449 lat (usec): min=1909, max=48607, avg=9313.81, stdev=5603.11 00:13:45.449 clat percentiles (usec): 00:13:45.449 | 1.00th=[ 3032], 5.00th=[ 4293], 10.00th=[ 5407], 20.00th=[ 7242], 00:13:45.449 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:13:45.449 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[14484], 95.00th=[16712], 00:13:45.449 | 99.00th=[39060], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:13:45.449 | 99.99th=[48497] 00:13:45.449 bw ( KiB/s): min=22000, max=31248, per=23.85%, avg=26624.00, stdev=6539.32, samples=2 00:13:45.449 iops : min= 5500, max= 7812, avg=6656.00, stdev=1634.83, samples=2 00:13:45.449 lat (msec) : 2=0.05%, 4=2.37%, 10=71.44%, 20=23.74%, 50=2.41% 00:13:45.449 cpu : usr=4.08%, sys=6.97%, ctx=804, majf=0, minf=1 00:13:45.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:45.449 issued rwts: total=6517,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:45.449 job1: (groupid=0, jobs=1): err= 0: pid=2698350: Wed Nov 20 06:24:05 2024 00:13:45.449 read: IOPS=7924, BW=31.0MiB/s (32.5MB/s)(31.1MiB/1005msec) 00:13:45.449 slat (nsec): min=956, max=7642.2k, avg=66849.59, stdev=489482.98 00:13:45.449 clat (usec): min=1755, max=16287, avg=8495.45, stdev=1988.56 00:13:45.449 lat (usec): min=3009, max=16316, avg=8562.30, stdev=2024.04 00:13:45.449 clat percentiles (usec): 00:13:45.449 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 6980], 00:13:45.449 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8455], 00:13:45.449 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[11469], 95.00th=[12780], 00:13:45.449 | 99.00th=[14615], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:13:45.449 | 99.99th=[16319] 00:13:45.449 write: IOPS=8151, BW=31.8MiB/s 
(33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:13:45.449 slat (nsec): min=1639, max=6746.2k, avg=51966.32, stdev=338153.60 00:13:45.449 clat (usec): min=1181, max=19267, avg=7307.11, stdev=1867.40 00:13:45.449 lat (usec): min=1193, max=19269, avg=7359.07, stdev=1894.34 00:13:45.449 clat percentiles (usec): 00:13:45.449 | 1.00th=[ 2573], 5.00th=[ 4015], 10.00th=[ 4883], 20.00th=[ 6063], 00:13:45.449 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 7963], 00:13:45.449 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 9241], 00:13:45.449 | 99.00th=[13960], 99.50th=[14877], 99.90th=[16712], 99.95th=[16909], 00:13:45.449 | 99.99th=[19268] 00:13:45.449 bw ( KiB/s): min=30672, max=34864, per=29.35%, avg=32768.00, stdev=2964.19, samples=2 00:13:45.449 iops : min= 7668, max= 8716, avg=8192.00, stdev=741.05, samples=2 00:13:45.449 lat (msec) : 2=0.18%, 4=2.77%, 10=87.21%, 20=9.84% 00:13:45.449 cpu : usr=5.78%, sys=7.97%, ctx=710, majf=0, minf=2 00:13:45.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:45.449 issued rwts: total=7964,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:45.449 job2: (groupid=0, jobs=1): err= 0: pid=2698370: Wed Nov 20 06:24:05 2024 00:13:45.449 read: IOPS=6446, BW=25.2MiB/s (26.4MB/s)(25.3MiB/1003msec) 00:13:45.449 slat (nsec): min=918, max=5062.5k, avg=80072.50, stdev=508803.13 00:13:45.449 clat (usec): min=1753, max=18075, avg=9949.17, stdev=1570.42 00:13:45.449 lat (usec): min=1755, max=18081, avg=10029.24, stdev=1624.53 00:13:45.449 clat percentiles (usec): 00:13:45.449 | 1.00th=[ 6390], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[ 8717], 00:13:45.449 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:13:45.449 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11994], 95.00th=[13173], 00:13:45.449 | 99.00th=[15139], 99.50th=[15401], 99.90th=[16057], 99.95th=[16188], 00:13:45.449 | 99.99th=[17957] 00:13:45.449 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:13:45.449 slat (nsec): min=1561, max=7008.4k, avg=68467.57, stdev=330067.53 00:13:45.449 clat (usec): min=4961, max=18662, avg=9401.61, stdev=1266.92 00:13:45.449 lat (usec): min=4968, max=18694, avg=9470.07, stdev=1294.56 00:13:45.449 clat percentiles (usec): 00:13:45.449 | 1.00th=[ 5997], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8455], 00:13:45.449 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:13:45.449 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11994], 00:13:45.449 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14091], 99.95th=[16319], 00:13:45.449 | 99.99th=[18744] 00:13:45.449 bw ( KiB/s): min=25672, max=27576, per=23.85%, avg=26624.00, stdev=1346.33, samples=2 00:13:45.449 iops : min= 6418, max= 6894, avg=6656.00, stdev=336.58, samples=2 00:13:45.449 lat (msec) : 2=0.06%, 10=69.81%, 20=30.13% 00:13:45.449 cpu : usr=3.99%, sys=4.49%, ctx=811, majf=0, minf=1 00:13:45.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:45.449 issued rwts: total=6466,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.449 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:13:45.449 job3: (groupid=0, jobs=1): err= 0: pid=2698377: Wed Nov 20 06:24:05 2024 00:13:45.449 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:13:45.449 slat (nsec): min=914, max=10800k, avg=81182.93, stdev=557666.55 00:13:45.449 clat (usec): min=4740, max=26054, avg=10476.41, stdev=2634.22 00:13:45.449 lat (usec): min=4748, max=26063, avg=10557.60, stdev=2672.28 00:13:45.449 clat percentiles (usec): 00:13:45.449 | 1.00th=[ 6456], 5.00th=[ 7242], 10.00th=[ 7963], 20.00th=[ 9241], 00:13:45.449 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:13:45.449 | 70.00th=[10421], 80.00th=[11469], 90.00th=[13698], 95.00th=[15270], 00:13:45.449 | 99.00th=[21103], 99.50th=[22938], 99.90th=[26084], 99.95th=[26084], 00:13:45.449 | 99.99th=[26084] 00:13:45.449 write: IOPS=6516, BW=25.5MiB/s (26.7MB/s)(25.6MiB/1004msec); 0 zone resets 00:13:45.449 slat (nsec): min=1583, max=8424.5k, avg=67739.44, stdev=326819.38 00:13:45.449 clat (usec): min=631, max=23491, avg=9634.16, stdev=2569.82 00:13:45.449 lat (usec): min=640, max=23494, avg=9701.90, stdev=2579.06 00:13:45.449 clat percentiles (usec): 00:13:45.449 | 1.00th=[ 2802], 5.00th=[ 5735], 10.00th=[ 7177], 20.00th=[ 8586], 00:13:45.449 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:13:45.449 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[12780], 95.00th=[14877], 00:13:45.449 | 99.00th=[17957], 99.50th=[19268], 99.90th=[20579], 99.95th=[21890], 00:13:45.449 | 99.99th=[23462] 00:13:45.449 bw ( KiB/s): min=24576, max=26752, per=22.99%, avg=25664.00, stdev=1538.66, samples=2 00:13:45.449 iops : min= 6144, max= 6688, avg=6416.00, stdev=384.67, samples=2 00:13:45.449 lat (usec) : 750=0.02% 00:13:45.449 lat (msec) : 2=0.26%, 4=0.65%, 10=66.33%, 20=31.81%, 50=0.92% 00:13:45.449 cpu : usr=4.09%, sys=5.18%, ctx=806, majf=0, minf=1 00:13:45.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:45.449 issued rwts: total=6144,6543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:45.449 00:13:45.449 Run status group 0 (all jobs): 00:13:45.449 READ: bw=105MiB/s (110MB/s), 23.9MiB/s-31.0MiB/s (25.1MB/s-32.5MB/s), io=106MiB (111MB), run=1003-1005msec 00:13:45.449 WRITE: bw=109MiB/s (114MB/s), 25.5MiB/s-31.8MiB/s (26.7MB/s-33.4MB/s), io=110MiB (115MB), run=1003-1005msec 00:13:45.449 00:13:45.449 Disk stats (read/write): 00:13:45.449 nvme0n1: ios=5171/5425, merge=0/0, ticks=51339/50772, in_queue=102111, util=97.49% 00:13:45.449 nvme0n2: ios=6692/6943, merge=0/0, ticks=52867/46830, in_queue=99697, util=97.86% 00:13:45.449 nvme0n3: ios=5308/5632, merge=0/0, ticks=26339/24338, in_queue=50677, util=88.40% 00:13:45.449 nvme0n4: ios=5120/5335, merge=0/0, ticks=35371/32958, in_queue=68329, util=89.53% 00:13:45.449 06:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:45.449 [global] 00:13:45.449 thread=1 00:13:45.449 invalidate=1 00:13:45.449 rw=randwrite 00:13:45.449 time_based=1 00:13:45.449 runtime=1 00:13:45.449 ioengine=libaio 00:13:45.449 direct=1 00:13:45.449 bs=4096 00:13:45.449 iodepth=128 00:13:45.449 norandommap=0 00:13:45.449 numjobs=1 00:13:45.449 00:13:45.449 verify_dump=1 
00:13:45.449 verify_backlog=512 00:13:45.449 verify_state_save=0 00:13:45.449 do_verify=1 00:13:45.449 verify=crc32c-intel 00:13:45.449 [job0] 00:13:45.449 filename=/dev/nvme0n1 00:13:45.449 [job1] 00:13:45.449 filename=/dev/nvme0n2 00:13:45.449 [job2] 00:13:45.449 filename=/dev/nvme0n3 00:13:45.449 [job3] 00:13:45.449 filename=/dev/nvme0n4 00:13:45.449 Could not set queue depth (nvme0n1) 00:13:45.449 Could not set queue depth (nvme0n2) 00:13:45.449 Could not set queue depth (nvme0n3) 00:13:45.449 Could not set queue depth (nvme0n4) 00:13:45.710 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.710 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.710 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.710 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.710 fio-3.35 00:13:45.710 Starting 4 threads 00:13:47.094 00:13:47.094 job0: (groupid=0, jobs=1): err= 0: pid=2698845: Wed Nov 20 06:24:07 2024 00:13:47.094 read: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1005msec) 00:13:47.094 slat (nsec): min=990, max=13220k, avg=92338.99, stdev=679220.14 00:13:47.094 clat (usec): min=3044, max=34249, avg=10895.38, stdev=4580.86 00:13:47.094 lat (usec): min=3570, max=34277, avg=10987.72, stdev=4634.52 00:13:47.094 clat percentiles (usec): 00:13:47.094 | 1.00th=[ 4686], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 6915], 00:13:47.094 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[11207], 00:13:47.094 | 70.00th=[12387], 80.00th=[14484], 90.00th=[16909], 95.00th=[21365], 00:13:47.094 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24511], 99.95th=[31589], 00:13:47.094 | 99.99th=[34341] 00:13:47.094 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:13:47.094 slat (nsec): min=1667, max=52241k, avg=183964.23, stdev=1225829.72 00:13:47.094 clat (usec): min=1191, max=92689, avg=25067.97, stdev=19895.83 00:13:47.094 lat (usec): min=1202, max=92692, avg=25251.93, stdev=20002.38 00:13:47.094 clat percentiles (usec): 00:13:47.094 | 1.00th=[ 2704], 5.00th=[ 5473], 10.00th=[ 8848], 20.00th=[12125], 00:13:47.094 | 30.00th=[13173], 40.00th=[14615], 50.00th=[15533], 60.00th=[18482], 00:13:47.094 | 70.00th=[26346], 80.00th=[39584], 90.00th=[60556], 95.00th=[69731], 00:13:47.094 | 99.00th=[84411], 99.50th=[87557], 99.90th=[92799], 99.95th=[92799], 00:13:47.094 | 99.99th=[92799] 00:13:47.094 bw ( KiB/s): min=11920, max=16752, per=14.95%, avg=14336.00, stdev=3416.74, samples=2 00:13:47.094 iops : min= 2980, max= 4188, avg=3584.00, stdev=854.18, samples=2 00:13:47.094 lat (msec) : 2=0.14%, 4=1.68%, 10=30.72%, 20=44.95%, 50=14.49% 00:13:47.094 lat (msec) : 100=8.02% 00:13:47.094 cpu : usr=3.29%, sys=2.89%, ctx=466, majf=0, minf=1 00:13:47.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:47.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.094 issued rwts: total=3450,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.094 job1: (groupid=0, jobs=1): err= 0: pid=2698847: Wed Nov 20 06:24:07 2024 00:13:47.094 read: IOPS=7680, BW=30.0MiB/s (31.5MB/s)(30.2MiB/1007msec) 00:13:47.094 slat (nsec): min=1009, 
max=11694k, avg=63961.95, stdev=486296.98 00:13:47.094 clat (usec): min=3064, max=23733, avg=8545.53, stdev=2706.75 00:13:47.094 lat (usec): min=3073, max=23747, avg=8609.49, stdev=2737.75 00:13:47.094 clat percentiles (usec): 00:13:47.094 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 6587], 00:13:47.094 | 30.00th=[ 6915], 40.00th=[ 7177], 50.00th=[ 7832], 60.00th=[ 8356], 00:13:47.094 | 70.00th=[ 8979], 80.00th=[10290], 90.00th=[12256], 95.00th=[13566], 00:13:47.094 | 99.00th=[18482], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:13:47.094 | 99.99th=[23725] 00:13:47.094 write: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec); 0 zone resets 00:13:47.094 slat (nsec): min=1618, max=9287.3k, avg=56548.62, stdev=372445.44 00:13:47.094 clat (usec): min=1281, max=25159, avg=7525.70, stdev=3187.27 00:13:47.094 lat (usec): min=1292, max=25168, avg=7582.25, stdev=3209.10 00:13:47.094 clat percentiles (usec): 00:13:47.094 | 1.00th=[ 2769], 5.00th=[ 3818], 10.00th=[ 4146], 20.00th=[ 5669], 00:13:47.094 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7242], 00:13:47.094 | 70.00th=[ 7373], 80.00th=[ 8586], 90.00th=[11600], 95.00th=[14091], 00:13:47.094 | 99.00th=[20317], 99.50th=[20579], 99.90th=[25035], 99.95th=[25035], 00:13:47.094 | 99.99th=[25035] 00:13:47.094 bw ( KiB/s): min=32200, max=32752, per=33.86%, avg=32476.00, stdev=390.32, samples=2 00:13:47.094 iops : min= 8050, max= 8188, avg=8119.00, stdev=97.58, samples=2 00:13:47.094 lat (msec) : 2=0.05%, 4=3.47%, 10=79.02%, 20=16.44%, 50=1.02% 00:13:47.094 cpu : usr=4.67%, sys=9.54%, ctx=685, majf=0, minf=1 00:13:47.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:47.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.094 issued rwts: total=7734,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.094 job2: (groupid=0, jobs=1): err= 0: pid=2698853: Wed Nov 20 06:24:07 2024 00:13:47.094 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:13:47.094 slat (nsec): min=1010, max=14545k, avg=67660.05, stdev=576579.79 00:13:47.094 clat (usec): min=1460, max=31025, avg=9505.12, stdev=3940.30 00:13:47.094 lat (usec): min=1474, max=31071, avg=9572.78, stdev=3986.51 00:13:47.094 clat percentiles (usec): 00:13:47.094 | 1.00th=[ 2573], 5.00th=[ 3785], 10.00th=[ 5866], 20.00th=[ 7177], 00:13:47.094 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 9110], 00:13:47.094 | 70.00th=[10552], 80.00th=[12125], 90.00th=[14615], 95.00th=[19530], 00:13:47.094 | 99.00th=[22414], 99.50th=[22414], 99.90th=[26084], 99.95th=[26084], 00:13:47.094 | 99.99th=[31065] 00:13:47.094 write: IOPS=6347, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1008msec); 0 zone resets 00:13:47.094 slat (nsec): min=1593, max=13429k, avg=74955.86, stdev=525930.98 00:13:47.095 clat (usec): min=446, max=72550, avg=11623.74, stdev=13846.31 00:13:47.095 lat (usec): min=462, max=72558, avg=11698.70, stdev=13935.01 00:13:47.095 clat percentiles (usec): 00:13:47.095 | 1.00th=[ 1237], 5.00th=[ 2474], 10.00th=[ 3884], 20.00th=[ 4948], 00:13:47.095 | 30.00th=[ 5866], 40.00th=[ 6849], 50.00th=[ 7570], 60.00th=[ 8094], 00:13:47.095 | 70.00th=[ 8586], 80.00th=[11076], 90.00th=[18744], 95.00th=[53216], 00:13:47.095 | 99.00th=[67634], 99.50th=[69731], 99.90th=[71828], 99.95th=[72877], 00:13:47.095 | 99.99th=[72877] 00:13:47.095 bw ( KiB/s): 
min=17008, max=33160, per=26.15%, avg=25084.00, stdev=11421.19, samples=2 00:13:47.095 iops : min= 4252, max= 8290, avg=6271.00, stdev=2855.30, samples=2 00:13:47.095 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.10% 00:13:47.095 lat (msec) : 2=1.46%, 4=6.71%, 10=62.11%, 20=22.83%, 50=3.51% 00:13:47.095 lat (msec) : 100=3.23% 00:13:47.095 cpu : usr=5.56%, sys=6.65%, ctx=456, majf=0, minf=1 00:13:47.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:47.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.095 issued rwts: total=5632,6398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.095 job3: (groupid=0, jobs=1): err= 0: pid=2698858: Wed Nov 20 06:24:07 2024 00:13:47.095 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:13:47.095 slat (nsec): min=1027, max=9403.9k, avg=85325.79, stdev=576660.85 00:13:47.095 clat (usec): min=3642, max=42724, avg=10165.84, stdev=4091.33 00:13:47.095 lat (usec): min=3649, max=42726, avg=10251.16, stdev=4149.13 00:13:47.095 clat percentiles (usec): 00:13:47.095 | 1.00th=[ 4883], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7701], 00:13:47.095 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:13:47.095 | 70.00th=[10421], 80.00th=[11994], 90.00th=[14484], 95.00th=[16450], 00:13:47.095 | 99.00th=[28181], 99.50th=[34341], 99.90th=[41681], 99.95th=[42730], 00:13:47.095 | 99.99th=[42730] 00:13:47.095 write: IOPS=5986, BW=23.4MiB/s (24.5MB/s)(23.6MiB/1010msec); 0 zone resets 00:13:47.095 slat (nsec): min=1674, max=9821.5k, avg=77290.68, stdev=448595.31 00:13:47.095 clat (usec): min=2615, max=42717, avg=11666.11, stdev=7817.79 00:13:47.095 lat (usec): min=2623, max=42719, avg=11743.40, stdev=7856.23 00:13:47.095 clat percentiles (usec): 00:13:47.095 | 1.00th=[ 3589], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 6063], 00:13:47.095 | 30.00th=[ 7046], 40.00th=[ 8029], 50.00th=[ 9241], 60.00th=[11207], 00:13:47.095 | 70.00th=[12256], 80.00th=[14877], 90.00th=[21627], 95.00th=[31327], 00:13:47.095 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:47.095 | 99.99th=[42730] 00:13:47.095 bw ( KiB/s): min=22784, max=24560, per=24.68%, avg=23672.00, stdev=1255.82, samples=2 00:13:47.095 iops : min= 5696, max= 6140, avg=5918.00, stdev=313.96, samples=2 00:13:47.095 lat (msec) : 4=1.04%, 10=60.25%, 20=31.16%, 50=7.54% 00:13:47.095 cpu : usr=3.96%, sys=6.84%, ctx=473, majf=0, minf=1 00:13:47.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:47.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.095 issued rwts: total=5632,6046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.095 00:13:47.095 Run status group 0 (all jobs): 00:13:47.095 READ: bw=86.8MiB/s (91.0MB/s), 13.4MiB/s-30.0MiB/s (14.1MB/s-31.5MB/s), io=87.7MiB (91.9MB), run=1005-1010msec 00:13:47.095 WRITE: bw=93.7MiB/s (98.2MB/s), 13.9MiB/s-31.8MiB/s (14.6MB/s-33.3MB/s), io=94.6MiB (99.2MB), run=1005-1010msec 00:13:47.095 00:13:47.095 Disk stats (read/write): 00:13:47.095 nvme0n1: ios=2584/2879, merge=0/0, ticks=28109/67375, in_queue=95484, util=82.87% 00:13:47.095 nvme0n2: ios=6310/6656, merge=0/0, ticks=51350/47927, in_queue=99277, 
util=91.23% 00:13:47.095 nvme0n3: ios=4150/5120, merge=0/0, ticks=37702/61914, in_queue=99616, util=95.46% 00:13:47.095 nvme0n4: ios=5181/5183, merge=0/0, ticks=46327/44862, in_queue=91189, util=94.66% 00:13:47.095 06:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:47.095 06:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2699164 00:13:47.095 06:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:47.095 06:24:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:47.095 [global] 00:13:47.095 thread=1 00:13:47.095 invalidate=1 00:13:47.095 rw=read 00:13:47.095 time_based=1 00:13:47.095 runtime=10 00:13:47.095 ioengine=libaio 00:13:47.095 direct=1 00:13:47.095 bs=4096 00:13:47.095 iodepth=1 00:13:47.095 norandommap=1 00:13:47.095 numjobs=1 00:13:47.095 00:13:47.095 [job0] 00:13:47.095 filename=/dev/nvme0n1 00:13:47.095 [job1] 00:13:47.095 filename=/dev/nvme0n2 00:13:47.095 [job2] 00:13:47.095 filename=/dev/nvme0n3 00:13:47.095 [job3] 00:13:47.095 filename=/dev/nvme0n4 00:13:47.095 Could not set queue depth (nvme0n1) 00:13:47.095 Could not set queue depth (nvme0n2) 00:13:47.095 Could not set queue depth (nvme0n3) 00:13:47.095 Could not set queue depth (nvme0n4) 00:13:47.357 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.357 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.357 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.357 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.357 fio-3.35 00:13:47.357 Starting 4 threads 00:13:49.902 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:50.162 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=757760, buflen=4096 00:13:50.162 fio: pid=2699358, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:50.162 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:50.423 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.423 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:50.423 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=278528, buflen=4096 00:13:50.423 fio: pid=2699354, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:50.423 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.423 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:50.684 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2818048, buflen=4096 00:13:50.685 fio: pid=2699348, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:13:50.685 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.685 06:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:50.685 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10698752, buflen=4096 00:13:50.685 fio: pid=2699349, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:50.685 00:13:50.685 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2699348: Wed Nov 20 06:24:10 2024 00:13:50.685 read: IOPS=229, BW=917KiB/s (939kB/s)(2752KiB/3001msec) 00:13:50.685 slat (usec): min=6, max=27851, avg=95.52, stdev=1196.19 00:13:50.685 clat (usec): min=673, max=42129, avg=4220.90, stdev=10921.70 00:13:50.685 lat (usec): min=690, max=42154, avg=4316.53, stdev=10967.33 00:13:50.685 clat percentiles (usec): 00:13:50.685 | 1.00th=[ 758], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 955], 00:13:50.685 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1074], 00:13:50.685 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1237], 95.00th=[41157], 00:13:50.685 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:50.685 | 99.99th=[42206] 00:13:50.685 bw ( KiB/s): min= 88, max= 2736, per=20.49%, avg=920.00, stdev=1203.01, samples=5 00:13:50.685 iops : min= 22, max= 684, avg=230.00, stdev=300.75, samples=5 00:13:50.685 lat (usec) : 750=0.87%, 1000=34.11% 00:13:50.685 lat (msec) : 2=57.04%, 50=7.84% 00:13:50.685 cpu : usr=0.23%, sys=0.70%, ctx=694, majf=0, minf=1 00:13:50.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 issued rwts: total=689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.685 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2699349: Wed Nov 20 06:24:10 2024 00:13:50.685 read: IOPS=825, BW=3300KiB/s (3379kB/s)(10.2MiB/3166msec) 00:13:50.685 slat (usec): min=6, max=12578, avg=29.92, stdev=245.64 00:13:50.685 clat (usec): min=522, max=41954, avg=1167.60, stdev=2867.31 00:13:50.685 lat (usec): min=530, max=53911, avg=1197.52, stdev=2946.21 00:13:50.685 clat percentiles (usec): 00:13:50.685 | 1.00th=[ 701], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 930], 00:13:50.685 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:13:50.685 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057], 00:13:50.685 | 99.00th=[ 1156], 99.50th=[12649], 99.90th=[41681], 99.95th=[42206], 00:13:50.685 | 99.99th=[42206] 00:13:50.685 bw ( KiB/s): min= 602, max= 4096, per=77.15%, avg=3463.00, stdev=1402.05, samples=6 00:13:50.685 iops : min= 150, max= 1024, avg=865.67, stdev=350.72, samples=6 00:13:50.685 lat (usec) : 750=2.03%, 1000=74.01% 00:13:50.685 lat (msec) : 2=23.38%, 20=0.04%, 50=0.50% 00:13:50.685 cpu : usr=1.33%, sys=2.02%, ctx=2618, majf=0, minf=2 00:13:50.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 issued rwts: total=2613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.685 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2699354: Wed Nov 20 06:24:10 2024 00:13:50.685 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(272KiB/2811msec) 00:13:50.685 slat (nsec): min=9779, max=57304, avg=25818.45, stdev=6724.89 00:13:50.685 clat (usec): min=985, max=42066, avg=40979.77, stdev=4945.99 00:13:50.685 lat (usec): min=1023, max=42092, avg=41005.48, stdev=4944.68 00:13:50.685 clat percentiles (usec): 00:13:50.685 | 1.00th=[ 988], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:50.685 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:13:50.685 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:50.685 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:50.685 | 99.99th=[42206] 00:13:50.685 bw ( KiB/s): min= 96, max= 104, per=2.16%, avg=97.60, stdev= 3.58, samples=5 00:13:50.685 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:13:50.685 lat (usec) : 1000=1.45% 00:13:50.685 lat (msec) : 50=97.10% 00:13:50.685 cpu : usr=0.14%, sys=0.00%, ctx=70, majf=0, minf=2 00:13:50.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.685 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2699358: Wed Nov 20 06:24:10 2024 00:13:50.685 read: IOPS=71, BW=286KiB/s (293kB/s)(740KiB/2586msec) 00:13:50.685 slat (nsec): min=6682, max=44142, avg=25685.01, stdev=4161.81 00:13:50.685 clat (usec): min=743, max=42149, avg=13894.88, stdev=18969.08 00:13:50.685 lat (usec): min=770, max=42181, avg=13920.57, stdev=18969.33 00:13:50.685 clat percentiles (usec): 00:13:50.685 | 1.00th=[ 783], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 947], 00:13:50.685 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1057], 00:13:50.685 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:13:50.685 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:50.685 | 99.99th=[42206] 00:13:50.685 bw ( KiB/s): min= 96, max= 456, per=4.01%, avg=180.80, stdev=154.98, samples=5 00:13:50.685 iops : min= 24, max= 114, avg=45.20, stdev=38.75, samples=5 00:13:50.685 lat (usec) : 750=0.54%, 1000=40.32% 00:13:50.685 lat (msec) : 2=26.88%, 50=31.72% 00:13:50.685 cpu : usr=0.23%, sys=0.08%, ctx=186, majf=0, minf=2 00:13:50.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.685 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.685 00:13:50.685 Run status group 0 (all jobs): 00:13:50.685 READ: bw=4489KiB/s (4597kB/s), 96.8KiB/s-3300KiB/s (99.1kB/s-3379kB/s), io=13.9MiB (14.6MB), run=2586-3166msec 00:13:50.685 00:13:50.685 Disk stats (read/write): 00:13:50.685 
nvme0n1: ios=661/0, merge=0/0, ticks=2744/0, in_queue=2744, util=93.22% 00:13:50.685 nvme0n2: ios=2610/0, merge=0/0, ticks=2999/0, in_queue=2999, util=95.32% 00:13:50.685 nvme0n3: ios=63/0, merge=0/0, ticks=2581/0, in_queue=2581, util=96.03% 00:13:50.685 nvme0n4: ios=103/0, merge=0/0, ticks=2326/0, in_queue=2326, util=96.06% 00:13:50.946 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:50.946 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:51.206 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:51.206 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:51.206 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:51.206 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:51.473 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:51.473 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2699164 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:51.793 nvmf hotplug test: fio failed as expected 00:13:51.793 06:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.080 rmmod nvme_tcp 00:13:52.080 rmmod nvme_fabrics 00:13:52.080 rmmod nvme_keyring 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2695533 ']' 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2695533 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2695533 ']' 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2695533 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2695533 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2695533' 00:13:52.080 killing process with pid 2695533 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2695533 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2695533 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.080 06:24:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:52.080 06:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:54.676
00:13:54.676 real 0m29.275s
00:13:54.676 user 2m33.084s
00:13:54.676 sys 0m9.337s
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:54.676 ************************************
00:13:54.676 END TEST nvmf_fio_target
00:13:54.676 ************************************
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:13:54.676 ************************************
00:13:54.676 START TEST nvmf_bdevio
00:13:54.676 ************************************
00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:13:54.676 * Looking for test storage...
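
The nvmf_fio_target run ending above is SPDK's fio hotplug test: fio reads from four NVMe-oF namespaces while the raid and malloc bdevs backing them are deleted over RPC, so every job errors out with err=95 (Operation not supported) and fio exits non-zero, which the script reports as "nvmf hotplug test: fio failed as expected". A minimal sketch of that pattern, reconstructed from the commands traced above; the paths, flags, and bdev names come from the log, while the loop and the status check are a simplified assumption rather than the verbatim fio.sh source:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    wrapper=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper

    "$wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &   # 4k direct reads, iodepth 1, 10s
    fio_pid=$!                                        # 2699164 in the run above
    sleep 3
    "$rpc" bdev_raid_delete concat0                   # delete backing bdevs mid-I/O
    "$rpc" bdev_raid_delete raid0
    for malloc_bdev in Malloc{0..6}; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?                  # 4 in the run above
    if [ "$fio_status" -ne 0 ]; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi

In the run above fio_status is 4 and each job's summary shows err=95 on /dev/nvme0n1 through /dev/nvme0n4, which is the expected outcome once the backing bdevs disappear.
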
00:13:54.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.676 --rc genhtml_branch_coverage=1 00:13:54.676 --rc genhtml_function_coverage=1 00:13:54.676 --rc genhtml_legend=1 00:13:54.676 --rc geninfo_all_blocks=1 00:13:54.676 --rc geninfo_unexecuted_blocks=1 00:13:54.676 00:13:54.676 ' 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.676 --rc genhtml_branch_coverage=1 00:13:54.676 --rc genhtml_function_coverage=1 00:13:54.676 --rc genhtml_legend=1 00:13:54.676 --rc geninfo_all_blocks=1 00:13:54.676 --rc geninfo_unexecuted_blocks=1 00:13:54.676 00:13:54.676 ' 00:13:54.676 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.676 --rc genhtml_branch_coverage=1 00:13:54.676 --rc genhtml_function_coverage=1 00:13:54.676 --rc genhtml_legend=1 00:13:54.676 --rc geninfo_all_blocks=1 00:13:54.677 --rc geninfo_unexecuted_blocks=1 00:13:54.677 00:13:54.677 ' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.677 --rc genhtml_branch_coverage=1 00:13:54.677 --rc genhtml_function_coverage=1 00:13:54.677 --rc genhtml_legend=1 00:13:54.677 --rc geninfo_all_blocks=1 00:13:54.677 --rc geninfo_unexecuted_blocks=1 00:13:54.677 00:13:54.677 ' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.677 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.821 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.822 06:24:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.822 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.822 
06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.822 06:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:14:02.822 00:14:02.822 --- 10.0.0.2 ping statistics --- 00:14:02.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.822 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:14:02.822 00:14:02.822 --- 10.0.0.1 ping statistics --- 00:14:02.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.822 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2705134 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2705134 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2705134 ']' 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:02.822 06:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.822 [2024-11-20 06:24:22.303264] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:14:02.822 [2024-11-20 06:24:22.303331] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.822 [2024-11-20 06:24:22.405233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.822 [2024-11-20 06:24:22.457400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.822 [2024-11-20 06:24:22.457456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.822 [2024-11-20 06:24:22.457464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.823 [2024-11-20 06:24:22.457473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.823 [2024-11-20 06:24:22.457479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.823 [2024-11-20 06:24:22.459531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:02.823 [2024-11-20 06:24:22.459695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:02.823 [2024-11-20 06:24:22.459857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.823 [2024-11-20 06:24:22.459857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.084 [2024-11-20 06:24:23.184654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.084 Malloc0 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.084 06:24:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.084 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.085 [2024-11-20 06:24:23.258824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:03.085 { 00:14:03.085 "params": { 00:14:03.085 "name": "Nvme$subsystem", 00:14:03.085 "trtype": "$TEST_TRANSPORT", 00:14:03.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.085 "adrfam": "ipv4", 00:14:03.085 "trsvcid": "$NVMF_PORT", 00:14:03.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.085 "hdgst": ${hdgst:-false}, 00:14:03.085 "ddgst": ${ddgst:-false} 00:14:03.085 }, 00:14:03.085 "method": "bdev_nvme_attach_controller" 00:14:03.085 } 00:14:03.085 EOF 00:14:03.085 )") 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:03.085 06:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:03.085 "params": { 00:14:03.085 "name": "Nvme1", 00:14:03.085 "trtype": "tcp", 00:14:03.085 "traddr": "10.0.0.2", 00:14:03.085 "adrfam": "ipv4", 00:14:03.085 "trsvcid": "4420", 00:14:03.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:03.085 "hdgst": false, 00:14:03.085 "ddgst": false 00:14:03.085 }, 00:14:03.085 "method": "bdev_nvme_attach_controller" 00:14:03.085 }' 00:14:03.085 [2024-11-20 06:24:23.318179] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:14:03.085 [2024-11-20 06:24:23.318250] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2705207 ] 00:14:03.345 [2024-11-20 06:24:23.412196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.345 [2024-11-20 06:24:23.469215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.345 [2024-11-20 06:24:23.469330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.345 [2024-11-20 06:24:23.469330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.606 I/O targets: 00:14:03.606 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:03.606 00:14:03.606 00:14:03.606 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.606 http://cunit.sourceforge.net/ 00:14:03.606 00:14:03.606 00:14:03.606 Suite: bdevio tests on: Nvme1n1 00:14:03.606 Test: blockdev write read block ...passed 00:14:03.606 Test: blockdev write zeroes read block ...passed 00:14:03.606 Test: blockdev write zeroes read no split ...passed 00:14:03.606 Test: blockdev write zeroes read split ...passed 00:14:03.606 Test: blockdev write zeroes read split partial ...passed 00:14:03.606 Test: blockdev reset ...[2024-11-20 06:24:23.851156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:03.606 [2024-11-20 06:24:23.851258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66970 (9): Bad file descriptor 00:14:03.606 [2024-11-20 06:24:23.869137] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:03.606 passed 00:14:03.606 Test: blockdev write read 8 blocks ...passed 00:14:03.606 Test: blockdev write read size > 128k ...passed 00:14:03.606 Test: blockdev write read invalid size ...passed 00:14:03.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:03.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:03.867 Test: blockdev write read max offset ...passed 00:14:03.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:03.867 Test: blockdev writev readv 8 blocks ...passed 00:14:03.867 Test: blockdev writev readv 30 x 1block ...passed 00:14:03.867 Test: blockdev writev readv block ...passed 00:14:03.867 Test: blockdev writev readv size > 128k ...passed 00:14:03.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:03.867 Test: blockdev comparev and writev ...[2024-11-20 06:24:24.095420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.095474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:03.867 [2024-11-20 06:24:24.095492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.095501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:03.867 [2024-11-20 06:24:24.096033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.096045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:03.867 [2024-11-20 06:24:24.096060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.096068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:03.867 [2024-11-20 06:24:24.096649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.096661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:03.867 [2024-11-20 06:24:24.096682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.096691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:03.867 [2024-11-20 06:24:24.097220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.097232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:03.867 [2024-11-20 06:24:24.097246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.867 [2024-11-20 06:24:24.097254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:14:03.867 passed
00:14:04.129 Test: blockdev nvme passthru rw ...passed
00:14:04.129 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:24:24.182013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:04.129 [2024-11-20 06:24:24.182030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:14:04.129 [2024-11-20 06:24:24.182368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:04.129 [2024-11-20 06:24:24.182380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:14:04.129 [2024-11-20 06:24:24.182641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:04.129 [2024-11-20 06:24:24.182654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:14:04.129 [2024-11-20 06:24:24.182910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:04.129 [2024-11-20 06:24:24.182920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:14:04.129 passed
00:14:04.129 Test: blockdev nvme admin passthru ...passed
00:14:04.129 Test: blockdev copy ...passed
00:14:04.129
00:14:04.129 Run Summary: Type Total Ran Passed Failed Inactive
00:14:04.129 suites 1 1 n/a 0 0
00:14:04.129 tests 23 23 23 0 0
00:14:04.129 asserts 152 152 152 0 n/a
00:14:04.129
00:14:04.129 Elapsed time = 1.215 seconds
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:04.129 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:04.129 rmmod nvme_tcp
00:14:04.389 rmmod nvme_fabrics
00:14:04.389 rmmod nvme_keyring
00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
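Note: nvmfcleanup, expanded above, syncs dirty pages and then retries unloading the initiator-side kernel modules; the rmmod lines are modprobe's verbose output. Condensed, the loop is roughly the following sketch (retry bound and module names as traced; the break-on-success detail is assumed):

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # drags out nvme_tcp, nvme_fabrics, nvme_keyring
  done
  modprobe -v -r nvme-fabrics
  set -e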
00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2705134 ']' 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2705134 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2705134 ']' 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2705134 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2705134 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2705134' 00:14:04.389 killing process with pid 2705134 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2705134 00:14:04.389 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2705134 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.650 06:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.562 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:06.562 00:14:06.562 real 0m12.263s 00:14:06.562 user 0m13.173s 00:14:06.562 sys 0m6.305s 00:14:06.562 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.562 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:06.562 ************************************ 00:14:06.562 END TEST nvmf_bdevio 00:14:06.562 ************************************ 00:14:06.562 06:24:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:06.562 00:14:06.562 real 5m4.656s 00:14:06.562 user 11m41.447s 00:14:06.562 sys 1m50.868s 
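Note: killprocess, traced above for the target pid 2705134, probes the process with kill -0, refuses to signal anything whose comm name is sudo, then kills and reaps it. A simplified sketch of that pattern (autotest_common.sh's real helper carries extra branches not shown here):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 0                            # already gone, nothing to do
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      kill "$pid"
      wait "$pid"                                           # valid: the app is this shell's child
  }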
00:14:06.562 06:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.562 06:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:06.562 ************************************ 00:14:06.562 END TEST nvmf_target_core 00:14:06.562 ************************************ 00:14:06.824 06:24:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:06.824 06:24:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:06.824 06:24:26 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:06.824 06:24:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.824 ************************************ 00:14:06.824 START TEST nvmf_target_extra 00:14:06.824 ************************************ 00:14:06.824 06:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:06.824 * Looking for test storage... 00:14:06.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:06.824 06:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:06.824 06:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:14:06.824 06:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:06.824 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:06.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.825 --rc genhtml_branch_coverage=1 00:14:06.825 --rc genhtml_function_coverage=1 00:14:06.825 --rc genhtml_legend=1 00:14:06.825 --rc geninfo_all_blocks=1 00:14:06.825 --rc geninfo_unexecuted_blocks=1 00:14:06.825 00:14:06.825 ' 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:06.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.825 --rc genhtml_branch_coverage=1 00:14:06.825 --rc genhtml_function_coverage=1 00:14:06.825 --rc genhtml_legend=1 00:14:06.825 --rc geninfo_all_blocks=1 00:14:06.825 --rc geninfo_unexecuted_blocks=1 00:14:06.825 00:14:06.825 ' 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:06.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.825 --rc genhtml_branch_coverage=1 00:14:06.825 --rc genhtml_function_coverage=1 00:14:06.825 --rc genhtml_legend=1 00:14:06.825 --rc geninfo_all_blocks=1 00:14:06.825 --rc geninfo_unexecuted_blocks=1 00:14:06.825 00:14:06.825 ' 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:06.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.825 --rc genhtml_branch_coverage=1 00:14:06.825 --rc genhtml_function_coverage=1 00:14:06.825 --rc genhtml_legend=1 00:14:06.825 --rc geninfo_all_blocks=1 00:14:06.825 --rc geninfo_unexecuted_blocks=1 00:14:06.825 00:14:06.825 ' 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
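Note: the lt/cmp_versions expansion above is how the harness decides whether the installed lcov predates version 2 and therefore needs the older --rc lcov_*_coverage option spellings: both version strings are split on dots and dashes and compared field by field. A condensed sketch of that comparison (the real scripts/common.sh helpers also normalize each field through a decimal helper, elided here):

  lt() {
      local IFS=.- v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo 'old lcov: use the legacy --rc lcov_branch_coverage=1 spelling'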
00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.825 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:07.086 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.087 ************************************ 00:14:07.087 START TEST nvmf_example 00:14:07.087 ************************************ 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:07.087 * Looking for test storage... 
00:14:07.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.087 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.349 --rc genhtml_branch_coverage=1 00:14:07.349 --rc genhtml_function_coverage=1 00:14:07.349 --rc genhtml_legend=1 00:14:07.349 --rc geninfo_all_blocks=1 00:14:07.349 --rc geninfo_unexecuted_blocks=1 00:14:07.349 00:14:07.349 ' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.349 --rc genhtml_branch_coverage=1 00:14:07.349 --rc genhtml_function_coverage=1 00:14:07.349 --rc genhtml_legend=1 00:14:07.349 --rc geninfo_all_blocks=1 00:14:07.349 --rc geninfo_unexecuted_blocks=1 00:14:07.349 00:14:07.349 ' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.349 --rc genhtml_branch_coverage=1 00:14:07.349 --rc genhtml_function_coverage=1 00:14:07.349 --rc genhtml_legend=1 00:14:07.349 --rc geninfo_all_blocks=1 00:14:07.349 --rc geninfo_unexecuted_blocks=1 00:14:07.349 00:14:07.349 ' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.349 --rc genhtml_branch_coverage=1 00:14:07.349 --rc genhtml_function_coverage=1 00:14:07.349 --rc genhtml_legend=1 00:14:07.349 --rc geninfo_all_blocks=1 00:14:07.349 --rc geninfo_unexecuted_blocks=1 00:14:07.349 00:14:07.349 ' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:07.349 06:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:07.349 06:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:07.349 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:07.350 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:15.494 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:15.495 06:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:15.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:15.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:15.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:15.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.495 06:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
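Note: nvmf_tcp_init, traced above, turns the two e810 ports into a point-to-point test rig: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), and a comment-tagged iptables rule admits NVMe/TCP traffic on port 4420. Stripped of the helper wrappers, the sequence is (addresses and interface names from this run; requires root):

  # sketch: target NIC isolated in its own namespace, initiator left in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The SPDK_NVMF comment is what the iptr teardown greps away later (iptables-save | grep -v SPDK_NVMF | iptables-restore), so the rule cannot outlive the test. The two pings that follow verify each side can reach the other before the target app starts.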
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:15.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:15.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms
00:14:15.495
00:14:15.495 --- 10.0.0.2 ping statistics ---
00:14:15.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:15.495 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms
00:14:15.495 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:15.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:15.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms
00:14:15.495
00:14:15.495 --- 10.0.0.1 ping statistics ---
00:14:15.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:15.495 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2709926
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2709926
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2709926 ']'
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100
00:14:15.496 06:24:34
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:15.496 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:14:15.758 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:27.999 Initializing NVMe Controllers
00:14:27.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:27.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:27.999 Initialization complete. Launching workers.
00:14:27.999 ========================================================
00:14:27.999                                                                          Latency(us)
00:14:27.999 Device Information                                                     :     IOPS    MiB/s   Average       min       max
00:14:27.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18553.50    72.47   3449.30    631.95  19227.43
00:14:27.999 ========================================================
00:14:27.999 Total                                                                  : 18553.50    72.47   3449.30    631.95  19227.43
00:14:27.999 
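Decoding the spdk_nvme_perf command line above (option meanings restated from the tool's help text for convenience; the command itself is verbatim from the trace):

# spdk_nvme_perf options used in this run
#   -q 64      queue depth of 64 outstanding I/Os
#   -o 4096    4 KiB I/O size
#   -w randrw  random mixed read/write workload
#   -M 30      30% reads, 70% writes
#   -t 10      run for 10 seconds
#   -r ...     transport ID of the listener created earlier
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The summary row is internally consistent: 18553.50 IOPS at 4096 bytes per I/O is 18553.50 x 4096 / 2^20 = 72.47 MiB/s, the figure in the MiB/s column, and a queue depth of 64 at that rate implies 64 / 18553.5 s = 3449 us average latency, matching the Average column.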
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:27.999 rmmod nvme_tcp
00:14:27.999 rmmod nvme_fabrics
00:14:27.999 rmmod nvme_keyring
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2709926 ']'
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2709926
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2709926 ']'
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2709926
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2709926
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']'
00:14:27.999 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2709926'
00:14:28.000 killing process with pid 2709926
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2709926
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2709926
00:14:28.000 nvmf threads initialize successfully
00:14:28.000 bdev subsystem init successfully
00:14:28.000 created a nvmf target service
00:14:28.000 create targets's poll groups done
00:14:28.000 all subsystems of target started
00:14:28.000 nvmf target is running
00:14:28.000 all subsystems of target stopped
00:14:28.000 destroy targets's poll groups done
00:14:28.000 destroyed the nvmf target service
00:14:28.000 bdev subsystem finish successfully
00:14:28.000 nvmf threads destroy successfully
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:28.000 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:28.571 
00:14:28.571 real 0m21.437s
00:14:28.571 user 0m46.639s
00:14:28.571 sys 0m6.996s
00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:28.571 ************************************
00:14:28.571 END TEST nvmf_example
00:14:28.571 ************************************
00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.571 ************************************ 00:14:28.571 START TEST nvmf_filesystem 00:14:28.571 ************************************ 00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:28.571 * Looking for test storage... 00:14:28.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:14:28.571 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:28.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.836 --rc genhtml_branch_coverage=1 00:14:28.836 --rc genhtml_function_coverage=1 00:14:28.836 --rc genhtml_legend=1 00:14:28.836 --rc geninfo_all_blocks=1 00:14:28.836 --rc geninfo_unexecuted_blocks=1 00:14:28.836 00:14:28.836 ' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:28.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.836 --rc genhtml_branch_coverage=1 00:14:28.836 --rc genhtml_function_coverage=1 00:14:28.836 --rc genhtml_legend=1 00:14:28.836 --rc geninfo_all_blocks=1 00:14:28.836 --rc geninfo_unexecuted_blocks=1 00:14:28.836 00:14:28.836 ' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:28.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.836 --rc genhtml_branch_coverage=1 00:14:28.836 --rc genhtml_function_coverage=1 00:14:28.836 --rc genhtml_legend=1 00:14:28.836 --rc geninfo_all_blocks=1 00:14:28.836 --rc geninfo_unexecuted_blocks=1 00:14:28.836 00:14:28.836 ' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:28.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.836 --rc genhtml_branch_coverage=1 00:14:28.836 --rc genhtml_function_coverage=1 00:14:28.836 --rc genhtml_legend=1 00:14:28.836 --rc geninfo_all_blocks=1 00:14:28.836 --rc geninfo_unexecuted_blocks=1 00:14:28.836 00:14:28.836 ' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:28.836 06:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:28.836 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:28.836 
06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:28.837 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:28.837 #define SPDK_CONFIG_H 00:14:28.837 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:28.837 #define SPDK_CONFIG_APPS 1 00:14:28.837 #define SPDK_CONFIG_ARCH native 00:14:28.837 #undef SPDK_CONFIG_ASAN 00:14:28.837 #undef SPDK_CONFIG_AVAHI 00:14:28.837 #undef SPDK_CONFIG_CET 00:14:28.837 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:28.837 #define SPDK_CONFIG_COVERAGE 1 00:14:28.838 #define SPDK_CONFIG_CROSS_PREFIX 00:14:28.838 #undef SPDK_CONFIG_CRYPTO 00:14:28.838 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:28.838 #undef SPDK_CONFIG_CUSTOMOCF 00:14:28.838 #undef SPDK_CONFIG_DAOS 00:14:28.838 #define SPDK_CONFIG_DAOS_DIR 00:14:28.838 #define SPDK_CONFIG_DEBUG 1 00:14:28.838 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:28.838 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:28.838 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:28.838 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:28.838 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:28.838 #undef SPDK_CONFIG_DPDK_UADK 00:14:28.838 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:28.838 #define SPDK_CONFIG_EXAMPLES 1 00:14:28.838 #undef SPDK_CONFIG_FC 00:14:28.838 #define SPDK_CONFIG_FC_PATH 00:14:28.838 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:28.838 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:28.838 #define SPDK_CONFIG_FSDEV 1 00:14:28.838 #undef SPDK_CONFIG_FUSE 00:14:28.838 #undef SPDK_CONFIG_FUZZER 00:14:28.838 #define SPDK_CONFIG_FUZZER_LIB 00:14:28.838 #undef SPDK_CONFIG_GOLANG 00:14:28.838 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:28.838 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:28.838 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:28.838 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:28.838 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:28.838 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:28.838 #undef SPDK_CONFIG_HAVE_LZ4 00:14:28.838 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:28.838 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:28.838 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:28.838 #define SPDK_CONFIG_IDXD 1 00:14:28.838 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:28.838 #undef SPDK_CONFIG_IPSEC_MB 00:14:28.838 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:28.838 #define SPDK_CONFIG_ISAL 1 00:14:28.838 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:28.838 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:28.838 #define SPDK_CONFIG_LIBDIR 00:14:28.838 #undef SPDK_CONFIG_LTO 00:14:28.838 #define SPDK_CONFIG_MAX_LCORES 128 00:14:28.838 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:28.838 #define SPDK_CONFIG_NVME_CUSE 1 00:14:28.838 #undef SPDK_CONFIG_OCF 00:14:28.838 #define SPDK_CONFIG_OCF_PATH 00:14:28.838 #define SPDK_CONFIG_OPENSSL_PATH 00:14:28.838 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:28.838 #define SPDK_CONFIG_PGO_DIR 00:14:28.838 #undef SPDK_CONFIG_PGO_USE 00:14:28.838 #define SPDK_CONFIG_PREFIX /usr/local 00:14:28.838 #undef SPDK_CONFIG_RAID5F 00:14:28.838 #undef SPDK_CONFIG_RBD 00:14:28.838 #define SPDK_CONFIG_RDMA 1 00:14:28.838 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:28.838 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:28.838 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:28.838 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:28.838 #define SPDK_CONFIG_SHARED 1 00:14:28.838 #undef SPDK_CONFIG_SMA 00:14:28.838 #define SPDK_CONFIG_TESTS 1 00:14:28.838 #undef SPDK_CONFIG_TSAN 
00:14:28.838 #define SPDK_CONFIG_UBLK 1 00:14:28.838 #define SPDK_CONFIG_UBSAN 1 00:14:28.838 #undef SPDK_CONFIG_UNIT_TESTS 00:14:28.838 #undef SPDK_CONFIG_URING 00:14:28.838 #define SPDK_CONFIG_URING_PATH 00:14:28.838 #undef SPDK_CONFIG_URING_ZNS 00:14:28.838 #undef SPDK_CONFIG_USDT 00:14:28.838 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:28.838 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:28.838 #define SPDK_CONFIG_VFIO_USER 1 00:14:28.838 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:28.838 #define SPDK_CONFIG_VHOST 1 00:14:28.838 #define SPDK_CONFIG_VIRTIO 1 00:14:28.838 #undef SPDK_CONFIG_VTUNE 00:14:28.838 #define SPDK_CONFIG_VTUNE_DIR 00:14:28.838 #define SPDK_CONFIG_WERROR 1 00:14:28.838 #define SPDK_CONFIG_WPDK_DIR 00:14:28.838 #undef SPDK_CONFIG_XNVME 00:14:28.838 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:28.838 06:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:28.838 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:28.839 06:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:28.839 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:28.840 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
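
The ASAN_OPTIONS, UBSAN_OPTIONS and LSAN_OPTIONS strings in the trace above are plain colon-separated key=value lists that the sanitizer runtimes read at process startup; the script also writes a leak-suppression file so the known libfuse3 leak does not fail the run. The same setup can be reproduced by hand, using the exact paths and values from the trace:

    # Suppress the known libfuse3 leak instead of failing the test run.
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    # Make UBSan fatal with a distinctive exit code (134 = 128 + SIGABRT).
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
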
00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2712713 ]] 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2712713 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
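
set_test_storage, entered above with a 2 GiB request, walks the candidate directories (the test dir, a mktemp fallback under /tmp, then the fallback root) and parses df -T into the mounts/fss/sizes/avails/uses arrays seen in the trace below, settling on the first mount with enough free space; note the requested_size of 2214592512 is the 2 GiB plus 64 MiB of slack. A simplified sketch of the selection idea, using GNU df's --output mode instead of the script's awk parsing (the real helper also special-cases tmpfs/ramfs and checks how full the chosen mount would become):

    requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))  # 2214592512, as in the trace
    for dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      avail=$(df --output=avail -B1 "$dir" 2>/dev/null | tail -n1)
      if [[ -n $avail && $avail -ge $requested_size ]]; then
        target_dir=$dir
        break
      fi
    done
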
00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:14:28.841 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.0OrGgs 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0OrGgs/tests/target /tmp/spdk.0OrGgs 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:14:28.841 06:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:14:28.841 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=118957678592 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356509184 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10398830592 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666886144 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678252544 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871302656 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:28.842 06:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677888000 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=368640 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935634944 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935647232 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:14:28.842 * Looking for test storage... 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=118957678592 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=12613423104 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:14:28.842 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:29.109 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:29.109 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.109 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.109 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.109 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.109 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.109 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:29.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.110 --rc genhtml_branch_coverage=1 00:14:29.110 --rc genhtml_function_coverage=1 00:14:29.110 --rc genhtml_legend=1 00:14:29.110 --rc geninfo_all_blocks=1 00:14:29.110 --rc geninfo_unexecuted_blocks=1 00:14:29.110 00:14:29.110 ' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:29.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.110 --rc genhtml_branch_coverage=1 00:14:29.110 --rc genhtml_function_coverage=1 00:14:29.110 --rc genhtml_legend=1 00:14:29.110 --rc geninfo_all_blocks=1 00:14:29.110 --rc geninfo_unexecuted_blocks=1 00:14:29.110 00:14:29.110 ' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:29.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.110 --rc genhtml_branch_coverage=1 00:14:29.110 --rc genhtml_function_coverage=1 00:14:29.110 --rc genhtml_legend=1 00:14:29.110 --rc geninfo_all_blocks=1 00:14:29.110 --rc geninfo_unexecuted_blocks=1 00:14:29.110 00:14:29.110 ' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:29.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.110 --rc genhtml_branch_coverage=1 00:14:29.110 --rc genhtml_function_coverage=1 00:14:29.110 --rc genhtml_legend=1 00:14:29.110 --rc geninfo_all_blocks=1 00:14:29.110 --rc geninfo_unexecuted_blocks=1 00:14:29.110 00:14:29.110 ' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:29.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:29.110 06:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:29.110 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:37.282 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:37.282 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.282 06:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.282 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:37.282 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:37.283 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:37.283 06:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:37.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:14:37.283 00:14:37.283 --- 10.0.0.2 ping statistics --- 00:14:37.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.283 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:14:37.283 00:14:37.283 --- 10.0.0.1 ping statistics --- 00:14:37.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.283 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:37.283 ************************************ 00:14:37.283 START TEST nvmf_filesystem_no_in_capsule 00:14:37.283 ************************************ 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2716397 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2716397 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2716397 ']' 00:14:37.283 
06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:37.283 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.283 [2024-11-20 06:24:56.797468] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:14:37.283 [2024-11-20 06:24:56.797534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.283 [2024-11-20 06:24:56.895315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.283 [2024-11-20 06:24:56.948749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.283 [2024-11-20 06:24:56.948803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.283 [2024-11-20 06:24:56.948812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.283 [2024-11-20 06:24:56.948819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.283 [2024-11-20 06:24:56.948825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
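
nvmfappstart launches nvmf_tgt inside the freshly created cvl_0_0_ns_spdk namespace with core mask 0xF (binary 1111), which is why four reactors come up on cores 0 through 3 in the notices below; waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A minimal poll loop in the spirit of waitforlisten, run from the spdk checkout (simplified; the real helper also verifies the PID stays alive and enforces a retry budget):

    # Poll the RPC socket until the target is ready to accept commands.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
    done
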
00:14:37.283 [2024-11-20 06:24:56.951180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.283 [2024-11-20 06:24:56.951332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.283 [2024-11-20 06:24:56.951574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.283 [2024-11-20 06:24:56.951576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.543 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:37.543 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.544 [2024-11-20 06:24:57.675931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.544 Malloc1 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.544 06:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.544 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.805 [2024-11-20 06:24:57.836398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:37.805 { 00:14:37.805 "name": "Malloc1", 00:14:37.805 "aliases": [ 00:14:37.805 "2d976d75-52ae-41d0-9127-c02f8a067f4b" 00:14:37.805 ], 00:14:37.805 "product_name": "Malloc disk", 00:14:37.805 "block_size": 512, 00:14:37.805 "num_blocks": 1048576, 00:14:37.805 "uuid": "2d976d75-52ae-41d0-9127-c02f8a067f4b", 00:14:37.805 "assigned_rate_limits": { 00:14:37.805 "rw_ios_per_sec": 0, 00:14:37.805 "rw_mbytes_per_sec": 0, 00:14:37.805 "r_mbytes_per_sec": 0, 00:14:37.805 "w_mbytes_per_sec": 0 00:14:37.805 }, 00:14:37.805 "claimed": true, 00:14:37.805 "claim_type": "exclusive_write", 00:14:37.805 "zoned": false, 00:14:37.805 "supported_io_types": { 00:14:37.805 "read": 
true, 00:14:37.805 "write": true, 00:14:37.805 "unmap": true, 00:14:37.805 "flush": true, 00:14:37.805 "reset": true, 00:14:37.805 "nvme_admin": false, 00:14:37.805 "nvme_io": false, 00:14:37.805 "nvme_io_md": false, 00:14:37.805 "write_zeroes": true, 00:14:37.805 "zcopy": true, 00:14:37.805 "get_zone_info": false, 00:14:37.805 "zone_management": false, 00:14:37.805 "zone_append": false, 00:14:37.805 "compare": false, 00:14:37.805 "compare_and_write": false, 00:14:37.805 "abort": true, 00:14:37.805 "seek_hole": false, 00:14:37.805 "seek_data": false, 00:14:37.805 "copy": true, 00:14:37.805 "nvme_iov_md": false 00:14:37.805 }, 00:14:37.805 "memory_domains": [ 00:14:37.805 { 00:14:37.805 "dma_device_id": "system", 00:14:37.805 "dma_device_type": 1 00:14:37.805 }, 00:14:37.805 { 00:14:37.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.805 "dma_device_type": 2 00:14:37.805 } 00:14:37.805 ], 00:14:37.805 "driver_specific": {} 00:14:37.805 } 00:14:37.805 ]' 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:37.805 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:39.185 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.185 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:14:39.185 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.185 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:39.185 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:41.726 06:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:41.986 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:42.926 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:42.926 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:42.926 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:42.926 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.926 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:43.187 ************************************ 00:14:43.187 START TEST filesystem_ext4 00:14:43.187 ************************************ 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
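The attach sequence recorded above can be replayed by hand; a minimal sketch using this run's values (NQN, address, and serial are taken from the log above — substitute your own), without the harness's retry loops and with the --hostnqn/--hostid pinning omitted:

echo $((512 * 1048576))   # 512-byte blocks x 1048576 blocks = 536870912 bytes, the 512 MiB malloc size checked above
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1}')   # resolve the namespace by its serial
parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%                # one partition spanning the namespace
partprobe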
00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:14:43.187 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:14:43.188 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:14:43.188 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:43.188 mke2fs 1.47.0 (5-Feb-2023) 00:14:43.188 Discarding device blocks: 0/522240 done 00:14:43.188 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:43.188 Filesystem UUID: 595ef8ec-0208-4b50-8312-0357168e1c00 00:14:43.188 Superblock backups stored on blocks: 00:14:43.188 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:43.188 00:14:43.188 Allocating group tables: 0/64 done 00:14:43.188 Writing inode tables: 0/64 done 00:14:43.449 Creating journal (8192 blocks): done 00:14:45.663 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:14:45.663 00:14:45.663 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:14:45.663 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:50.947 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:50.947 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:50.947 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:50.947 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:50.947 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:50.947 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:51.208 
06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2716397 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:51.208 00:14:51.208 real 0m8.026s 00:14:51.208 user 0m0.033s 00:14:51.208 sys 0m0.075s 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:51.208 ************************************ 00:14:51.208 END TEST filesystem_ext4 00:14:51.208 ************************************ 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:51.208 ************************************ 00:14:51.208 START TEST filesystem_btrfs 00:14:51.208 ************************************ 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:14:51.208 06:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:14:51.208 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:51.468 btrfs-progs v6.8.1 00:14:51.468 See https://btrfs.readthedocs.io for more information. 00:14:51.468 00:14:51.468 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:51.468 NOTE: several default settings have changed in version 5.15, please make sure 00:14:51.468 this does not affect your deployments: 00:14:51.468 - DUP for metadata (-m dup) 00:14:51.469 - enabled no-holes (-O no-holes) 00:14:51.469 - enabled free-space-tree (-R free-space-tree) 00:14:51.469 00:14:51.469 Label: (null) 00:14:51.469 UUID: b3887f86-0fe4-434b-98ed-721af32d4030 00:14:51.469 Node size: 16384 00:14:51.469 Sector size: 4096 (CPU page size: 4096) 00:14:51.469 Filesystem size: 510.00MiB 00:14:51.469 Block group profiles: 00:14:51.469 Data: single 8.00MiB 00:14:51.469 Metadata: DUP 32.00MiB 00:14:51.469 System: DUP 8.00MiB 00:14:51.469 SSD detected: yes 00:14:51.469 Zoned device: no 00:14:51.469 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:51.469 Checksum: crc32c 00:14:51.469 Number of devices: 1 00:14:51.469 Devices: 00:14:51.469 ID SIZE PATH 00:14:51.469 1 510.00MiB /dev/nvme0n1p1 00:14:51.469 00:14:51.469 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:14:51.469 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:52.409 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:52.409 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:52.409 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:52.409 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:52.409 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:52.409 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2716397 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:52.410 
06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:52.410 00:14:52.410 real 0m1.266s 00:14:52.410 user 0m0.026s 00:14:52.410 sys 0m0.121s 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:52.410 ************************************ 00:14:52.410 END TEST filesystem_btrfs 00:14:52.410 ************************************ 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:52.410 ************************************ 00:14:52.410 START TEST filesystem_xfs 00:14:52.410 ************************************ 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:14:52.410 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:52.671 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:52.671 = sectsz=512 attr=2, projid32bit=1 00:14:52.671 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:52.671 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:52.671 data 
= bsize=4096 blocks=130560, imaxpct=25 00:14:52.671 = sunit=0 swidth=0 blks 00:14:52.671 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:52.671 log =internal log bsize=4096 blocks=16384, version=2 00:14:52.671 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:52.671 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:53.611 Discarding blocks...Done. 00:14:53.611 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:14:53.611 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2716397 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:55.525 00:14:55.525 real 0m2.802s 00:14:55.525 user 0m0.023s 00:14:55.525 sys 0m0.081s 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:55.525 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:55.525 ************************************ 00:14:55.525 END TEST filesystem_xfs 00:14:55.526 ************************************ 00:14:55.526 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:55.786 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.048 06:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2716397 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2716397 ']' 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2716397 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.048 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2716397 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2716397' 00:14:56.310 killing process with pid 2716397 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2716397 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 2716397 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:56.310 00:14:56.310 real 0m19.837s 00:14:56.310 user 1m18.407s 00:14:56.310 sys 0m1.446s 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:56.310 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:56.310 ************************************ 00:14:56.310 END TEST nvmf_filesystem_no_in_capsule 00:14:56.310 ************************************ 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:56.570 ************************************ 00:14:56.570 START TEST nvmf_filesystem_in_capsule 00:14:56.570 ************************************ 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2720592 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2720592 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2720592 ']' 00:14:56.570 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.571 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:56.571 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
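The in_capsule bring-up that follows mirrors the earlier run; condensed into a sketch, assuming a stock SPDK tree layout and the default /var/tmp/spdk.sock RPC socket (the harness additionally wraps the target in the cvl_0_0_ns_spdk network namespace, omitted here):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &                                  # same flags as the logged command line
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done   # stand-in for the harness's waitforlisten
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: allow 4 KiB of in-capsule data, the knob this half of the suite varies
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB backing bdev with 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420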
00:14:56.571 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:56.571 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:56.571 [2024-11-20 06:25:16.715003] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:14:56.571 [2024-11-20 06:25:16.715052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.571 [2024-11-20 06:25:16.805588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.571 [2024-11-20 06:25:16.836854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.571 [2024-11-20 06:25:16.836883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.571 [2024-11-20 06:25:16.836889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.571 [2024-11-20 06:25:16.836893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.571 [2024-11-20 06:25:16.836898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.571 [2024-11-20 06:25:16.838254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.571 [2024-11-20 06:25:16.838407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.571 [2024-11-20 06:25:16.838519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.571 [2024-11-20 06:25:16.838521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 [2024-11-20 06:25:17.568403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.512 06:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 Malloc1 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.512 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.513 [2024-11-20 06:25:17.706484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:14:57.513 06:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:57.513 { 00:14:57.513 "name": "Malloc1", 00:14:57.513 "aliases": [ 00:14:57.513 "74ab7348-a16d-4f48-ab1a-70e50b8d7f43" 00:14:57.513 ], 00:14:57.513 "product_name": "Malloc disk", 00:14:57.513 "block_size": 512, 00:14:57.513 "num_blocks": 1048576, 00:14:57.513 "uuid": "74ab7348-a16d-4f48-ab1a-70e50b8d7f43", 00:14:57.513 "assigned_rate_limits": { 00:14:57.513 "rw_ios_per_sec": 0, 00:14:57.513 "rw_mbytes_per_sec": 0, 00:14:57.513 "r_mbytes_per_sec": 0, 00:14:57.513 "w_mbytes_per_sec": 0 00:14:57.513 }, 00:14:57.513 "claimed": true, 00:14:57.513 "claim_type": "exclusive_write", 00:14:57.513 "zoned": false, 00:14:57.513 "supported_io_types": { 00:14:57.513 "read": true, 00:14:57.513 "write": true, 00:14:57.513 "unmap": true, 00:14:57.513 "flush": true, 00:14:57.513 "reset": true, 00:14:57.513 "nvme_admin": false, 00:14:57.513 "nvme_io": false, 00:14:57.513 "nvme_io_md": false, 00:14:57.513 "write_zeroes": true, 00:14:57.513 "zcopy": true, 00:14:57.513 "get_zone_info": false, 00:14:57.513 "zone_management": false, 00:14:57.513 "zone_append": false, 00:14:57.513 "compare": false, 00:14:57.513 "compare_and_write": false, 00:14:57.513 "abort": true, 00:14:57.513 "seek_hole": false, 00:14:57.513 "seek_data": false, 00:14:57.513 "copy": true, 00:14:57.513 "nvme_iov_md": false 00:14:57.513 }, 00:14:57.513 "memory_domains": [ 00:14:57.513 { 00:14:57.513 "dma_device_id": "system", 00:14:57.513 "dma_device_type": 1 00:14:57.513 }, 00:14:57.513 { 00:14:57.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.513 "dma_device_type": 2 00:14:57.513 } 00:14:57.513 ], 00:14:57.513 "driver_specific": {} 00:14:57.513 } 00:14:57.513 ]' 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:14:57.513 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:57.774 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:14:57.774 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:14:57.774 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:14:57.774 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:57.774 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.156 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.157 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:14:59.157 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.157 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:59.157 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:15:01.131 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:01.131 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:01.131 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:01.393 06:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:01.653 06:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:02.226 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:03.173 ************************************ 00:15:03.173 START TEST filesystem_in_capsule_ext4 00:15:03.173 ************************************ 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:15:03.173 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:03.173 mke2fs 1.47.0 (5-Feb-2023) 00:15:03.173 Discarding device blocks: 0/522240 done 00:15:03.173 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:03.173 Filesystem UUID: 6b54fc97-8c71-44bd-bd3d-e2bfa6746900 00:15:03.173 Superblock backups stored on blocks: 00:15:03.173 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:03.174 00:15:03.174 Allocating group tables: 0/64 done 00:15:03.174 Writing inode tables: 
0/64 done 00:15:06.475 Creating journal (8192 blocks): done 00:15:06.475 Writing superblocks and filesystem accounting information: 0/64 done 00:15:06.475 00:15:06.475 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:15:06.475 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2720592 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:13.076 00:15:13.076 real 0m9.033s 00:15:13.076 user 0m0.039s 00:15:13.076 sys 0m0.070s 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:13.076 ************************************ 00:15:13.076 END TEST filesystem_in_capsule_ext4 00:15:13.076 ************************************ 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:13.076 
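Every filesystem case in this suite runs the same create/verify cycle from target/filesystem.sh; compressed into one loop it is roughly the following (device and mount point as in this run; ext4 takes -F where btrfs and xfs take -f, matching make_filesystem above):

for fs in ext4 btrfs xfs; do
  [ "$fs" = ext4 ] && force=-F || force=-f
  mkfs.$fs $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync     # push a write through the NVMe/TCP data path
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                          # the target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # and the partition still visible on the host
done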
************************************ 00:15:13.076 START TEST filesystem_in_capsule_btrfs 00:15:13.076 ************************************ 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:15:13.076 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:13.076 btrfs-progs v6.8.1 00:15:13.076 See https://btrfs.readthedocs.io for more information. 00:15:13.076 00:15:13.076 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:13.076 NOTE: several default settings have changed in version 5.15, please make sure 00:15:13.076 this does not affect your deployments: 00:15:13.076 - DUP for metadata (-m dup) 00:15:13.076 - enabled no-holes (-O no-holes) 00:15:13.076 - enabled free-space-tree (-R free-space-tree) 00:15:13.076 00:15:13.076 Label: (null) 00:15:13.076 UUID: e12eab4a-10ed-482b-a190-ae5754765689 00:15:13.076 Node size: 16384 00:15:13.076 Sector size: 4096 (CPU page size: 4096) 00:15:13.076 Filesystem size: 510.00MiB 00:15:13.076 Block group profiles: 00:15:13.076 Data: single 8.00MiB 00:15:13.076 Metadata: DUP 32.00MiB 00:15:13.076 System: DUP 8.00MiB 00:15:13.076 SSD detected: yes 00:15:13.076 Zoned device: no 00:15:13.076 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:13.076 Checksum: crc32c 00:15:13.077 Number of devices: 1 00:15:13.077 Devices: 00:15:13.077 ID SIZE PATH 00:15:13.077 1 510.00MiB /dev/nvme0n1p1 00:15:13.077 00:15:13.077 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:15:13.077 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2720592 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:13.077 00:15:13.077 real 0m0.769s 00:15:13.077 user 0m0.030s 00:15:13.077 sys 0m0.118s 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:15:13.077 ************************************ 00:15:13.077 END TEST filesystem_in_capsule_btrfs 00:15:13.077 ************************************ 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:13.077 ************************************ 00:15:13.077 START TEST filesystem_in_capsule_xfs 00:15:13.077 ************************************ 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:15:13.077 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:13.077 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:13.077 = sectsz=512 attr=2, projid32bit=1 00:15:13.077 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:13.077 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:13.077 data = bsize=4096 blocks=130560, imaxpct=25 00:15:13.077 = sunit=0 swidth=0 blks 00:15:13.077 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:13.077 log =internal log bsize=4096 blocks=16384, version=2 00:15:13.077 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:13.077 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:14.467 Discarding blocks...Done. 
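The mkfs.xfs output above marks the start of the same smoke test that just ran against btrfs: target/filesystem.sh formats the fabric-attached partition, mounts it, pushes a small write through, and then checks that both the target process and the block devices survived. A minimal sketch of that sequence, assuming the same device and mount point the log shows (the harness's retry logic around mkfs is omitted):

    #!/usr/bin/env bash
    # Sketch of the filesystem smoke test traced above (target/filesystem.sh).
    # Assumes the NVMe-oF namespace is visible as /dev/nvme0n1p1.
    set -e
    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    nvmfpid=$1                    # pid of the nvmf_tgt under test

    mkfs.xfs -f "$dev"            # the btrfs/ext4 variants differ only in mkfs flags
    mount "$dev" "$mnt"
    touch "$mnt/aaa"              # write a file through the fabric
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"

    kill -0 "$nvmfpid"                        # target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # controller still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible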
00:15:14.467 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:15:14.467 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2720592 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:17.017 00:15:17.017 real 0m3.659s 00:15:17.017 user 0m0.030s 00:15:17.017 sys 0m0.075s 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:17.017 ************************************ 00:15:17.017 END TEST filesystem_in_capsule_xfs 00:15:17.017 ************************************ 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:17.017 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2720592 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2720592 ']' 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2720592 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2720592 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2720592' 00:15:17.017 killing process with pid 2720592 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2720592 00:15:17.017 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2720592 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:17.279 00:15:17.279 real 0m20.778s 00:15:17.279 user 1m22.289s 00:15:17.279 sys 0m1.398s 00:15:17.279 06:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:17.279 ************************************ 00:15:17.279 END TEST nvmf_filesystem_in_capsule 00:15:17.279 ************************************ 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:17.279 rmmod nvme_tcp 00:15:17.279 rmmod nvme_fabrics 00:15:17.279 rmmod nvme_keyring 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:17.279 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:15:17.540 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:17.540 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:17.540 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.540 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.540 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:19.454 00:15:19.454 real 0m50.949s 00:15:19.454 user 2m43.150s 00:15:19.454 sys 0m8.700s 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.454 
************************************ 00:15:19.454 END TEST nvmf_filesystem 00:15:19.454 ************************************ 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:19.454 ************************************ 00:15:19.454 START TEST nvmf_target_discovery 00:15:19.454 ************************************ 00:15:19.454 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:19.715 * Looking for test storage... 00:15:19.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:19.715 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.716 --rc genhtml_branch_coverage=1 00:15:19.716 --rc genhtml_function_coverage=1 00:15:19.716 --rc genhtml_legend=1 00:15:19.716 --rc geninfo_all_blocks=1 00:15:19.716 --rc geninfo_unexecuted_blocks=1 00:15:19.716 00:15:19.716 ' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.716 --rc genhtml_branch_coverage=1 00:15:19.716 --rc genhtml_function_coverage=1 00:15:19.716 --rc genhtml_legend=1 00:15:19.716 --rc geninfo_all_blocks=1 00:15:19.716 --rc geninfo_unexecuted_blocks=1 00:15:19.716 00:15:19.716 ' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.716 --rc genhtml_branch_coverage=1 00:15:19.716 --rc genhtml_function_coverage=1 00:15:19.716 --rc genhtml_legend=1 00:15:19.716 --rc geninfo_all_blocks=1 00:15:19.716 --rc geninfo_unexecuted_blocks=1 00:15:19.716 00:15:19.716 ' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.716 --rc genhtml_branch_coverage=1 00:15:19.716 --rc genhtml_function_coverage=1 00:15:19.716 --rc genhtml_legend=1 00:15:19.716 --rc geninfo_all_blocks=1 00:15:19.716 --rc geninfo_unexecuted_blocks=1 00:15:19.716 00:15:19.716 ' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:19.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:19.716 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:27.861 06:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:27.861 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:27.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:27.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:27.862 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
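The common.sh trace running through here is the NIC auto-detection pass: it builds an allowlist of supported PCI IDs (e810/x722/mlx), matches the two ICE functions (0x8086:0x159b), and resolves each one to its kernel net device through sysfs. Roughly, and assuming lspci is available (the real script walks a cached PCI bus listing instead of calling lspci):

    #!/usr/bin/env bash
    # Sketch of the PCI -> netdev mapping traced above (nvmf/common.sh).
    # 0x8086:0x159b is the Intel E810 ID the log matched.
    net_devs=()
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        # every network function exposes its interfaces under .../net/
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] || continue
            # the real script additionally requires the link state to be "up"
            net_devs+=("${path##*/}")          # e.g. cvl_0_0, cvl_0_1
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"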
00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:27.862 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:27.862 06:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:27.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:15:27.862 00:15:27.862 --- 10.0.0.2 ping statistics --- 00:15:27.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.862 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:27.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:15:27.862 00:15:27.862 --- 10.0.0.1 ping statistics --- 00:15:27.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.862 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.862 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2729068 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2729068 00:15:27.863 06:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2729068 ']' 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:27.863 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.863 [2024-11-20 06:25:47.558913] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:15:27.863 [2024-11-20 06:25:47.558979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.863 [2024-11-20 06:25:47.656859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.863 [2024-11-20 06:25:47.710337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.863 [2024-11-20 06:25:47.710390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.863 [2024-11-20 06:25:47.710398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.863 [2024-11-20 06:25:47.710406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.863 [2024-11-20 06:25:47.710412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
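By this point the two E810 ports have been split across namespaces (cvl_0_0 moved into cvl_0_0_ns_spdk as the target side, cvl_0_1 left in the root namespace as the initiator side), and nvmfappstart launches the target inside that namespace while waitforlisten polls for its RPC socket. A reduced sketch of those two steps, with the binary path and flags taken from the log (the real waitforlisten also probes the socket with an RPC call rather than just checking for its existence):

    #!/usr/bin/env bash
    # Sketch of nvmfappstart + waitforlisten as traced above.
    ns=cvl_0_0_ns_spdk
    app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

    ip netns exec "$ns" "$app" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait for the app to come up and open its RPC socket; the unix socket
    # lives on the shared filesystem, so it is visible outside the netns
    for _ in {1..100}; do
        [[ -S /var/tmp/spdk.sock ]] && break
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died early
        sleep 0.5
    done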
00:15:27.863 [2024-11-20 06:25:47.712701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.863 [2024-11-20 06:25:47.712863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.863 [2024-11-20 06:25:47.713029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.863 [2024-11-20 06:25:47.713032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.124 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.124 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:28.124 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.124 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.124 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 [2024-11-20 06:25:48.438346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 Null1 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 06:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 [2024-11-20 06:25:48.506376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 Null2 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.386 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:15:28.387 Null3 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 Null4 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.387 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:28.649 00:15:28.649 Discovery Log Number of Records 6, Generation counter 6 00:15:28.649 =====Discovery Log Entry 0====== 00:15:28.649 trtype: tcp 00:15:28.649 adrfam: ipv4 00:15:28.649 subtype: current discovery subsystem 00:15:28.649 treq: not required 00:15:28.649 portid: 0 00:15:28.649 trsvcid: 4420 00:15:28.649 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:28.649 traddr: 10.0.0.2 00:15:28.649 eflags: explicit discovery connections, duplicate discovery information 00:15:28.649 sectype: none 00:15:28.649 =====Discovery Log Entry 1====== 00:15:28.649 trtype: tcp 00:15:28.649 adrfam: ipv4 00:15:28.649 subtype: nvme subsystem 00:15:28.649 treq: not required 00:15:28.649 portid: 0 00:15:28.649 trsvcid: 4420 00:15:28.649 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:28.649 traddr: 10.0.0.2 00:15:28.649 eflags: none 00:15:28.649 sectype: none 00:15:28.649 =====Discovery Log Entry 2====== 00:15:28.649 trtype: tcp 00:15:28.649 adrfam: ipv4 00:15:28.649 subtype: nvme subsystem 00:15:28.649 treq: not required 00:15:28.649 portid: 0 00:15:28.649 trsvcid: 4420 00:15:28.649 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:28.649 traddr: 10.0.0.2 00:15:28.649 eflags: none 00:15:28.649 sectype: none 00:15:28.649 =====Discovery Log Entry 3====== 00:15:28.649 trtype: tcp 00:15:28.649 adrfam: ipv4 00:15:28.649 subtype: nvme subsystem 00:15:28.649 treq: not required 00:15:28.649 portid: 0 00:15:28.649 trsvcid: 4420 00:15:28.649 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:28.649 traddr: 10.0.0.2 00:15:28.649 eflags: none 00:15:28.649 sectype: none 00:15:28.649 =====Discovery Log Entry 4====== 00:15:28.649 trtype: tcp 00:15:28.649 adrfam: ipv4 00:15:28.649 subtype: nvme subsystem 
00:15:28.649 treq: not required 00:15:28.649 portid: 0 00:15:28.649 trsvcid: 4420 00:15:28.649 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:28.649 traddr: 10.0.0.2 00:15:28.649 eflags: none 00:15:28.649 sectype: none 00:15:28.649 =====Discovery Log Entry 5====== 00:15:28.649 trtype: tcp 00:15:28.649 adrfam: ipv4 00:15:28.649 subtype: discovery subsystem referral 00:15:28.649 treq: not required 00:15:28.649 portid: 0 00:15:28.649 trsvcid: 4430 00:15:28.649 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:28.649 traddr: 10.0.0.2 00:15:28.649 eflags: none 00:15:28.649 sectype: none 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:28.649 Perform nvmf subsystem discovery via RPC 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.649 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.649 [ 00:15:28.649 { 00:15:28.649 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:28.649 "subtype": "Discovery", 00:15:28.649 "listen_addresses": [ 00:15:28.649 { 00:15:28.649 "trtype": "TCP", 00:15:28.649 "adrfam": "IPv4", 00:15:28.649 "traddr": "10.0.0.2", 00:15:28.649 "trsvcid": "4420" 00:15:28.649 } 00:15:28.649 ], 00:15:28.649 "allow_any_host": true, 00:15:28.649 "hosts": [] 00:15:28.649 }, 00:15:28.649 { 00:15:28.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.649 "subtype": "NVMe", 00:15:28.649 "listen_addresses": [ 00:15:28.649 { 00:15:28.649 "trtype": "TCP", 00:15:28.649 "adrfam": "IPv4", 00:15:28.649 "traddr": "10.0.0.2", 00:15:28.649 "trsvcid": "4420" 00:15:28.649 } 00:15:28.649 ], 00:15:28.649 "allow_any_host": true, 00:15:28.649 "hosts": [], 00:15:28.649 "serial_number": "SPDK00000000000001", 00:15:28.649 "model_number": "SPDK bdev Controller", 00:15:28.650 "max_namespaces": 32, 00:15:28.650 "min_cntlid": 1, 00:15:28.650 "max_cntlid": 65519, 00:15:28.650 "namespaces": [ 00:15:28.650 { 00:15:28.650 "nsid": 1, 00:15:28.650 "bdev_name": "Null1", 00:15:28.650 "name": "Null1", 00:15:28.650 "nguid": "97BD1B2F07FF454CBCACCB215FFA0E2A", 00:15:28.650 "uuid": "97bd1b2f-07ff-454c-bcac-cb215ffa0e2a" 00:15:28.650 } 00:15:28.650 ] 00:15:28.650 }, 00:15:28.650 { 00:15:28.650 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:28.650 "subtype": "NVMe", 00:15:28.650 "listen_addresses": [ 00:15:28.650 { 00:15:28.650 "trtype": "TCP", 00:15:28.650 "adrfam": "IPv4", 00:15:28.650 "traddr": "10.0.0.2", 00:15:28.650 "trsvcid": "4420" 00:15:28.650 } 00:15:28.650 ], 00:15:28.650 "allow_any_host": true, 00:15:28.650 "hosts": [], 00:15:28.650 "serial_number": "SPDK00000000000002", 00:15:28.650 "model_number": "SPDK bdev Controller", 00:15:28.650 "max_namespaces": 32, 00:15:28.650 "min_cntlid": 1, 00:15:28.650 "max_cntlid": 65519, 00:15:28.650 "namespaces": [ 00:15:28.650 { 00:15:28.650 "nsid": 1, 00:15:28.650 "bdev_name": "Null2", 00:15:28.650 "name": "Null2", 00:15:28.650 "nguid": "7B98C674DEAB4C85926EA1067CE032AB", 00:15:28.650 "uuid": "7b98c674-deab-4c85-926e-a1067ce032ab" 00:15:28.650 } 00:15:28.650 ] 00:15:28.650 }, 00:15:28.650 { 00:15:28.650 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:28.650 "subtype": "NVMe", 00:15:28.650 "listen_addresses": [ 00:15:28.650 { 00:15:28.650 "trtype": "TCP", 00:15:28.650 "adrfam": "IPv4", 00:15:28.650 "traddr": "10.0.0.2", 
00:15:28.650 "trsvcid": "4420" 00:15:28.650 } 00:15:28.650 ], 00:15:28.650 "allow_any_host": true, 00:15:28.650 "hosts": [], 00:15:28.650 "serial_number": "SPDK00000000000003", 00:15:28.650 "model_number": "SPDK bdev Controller", 00:15:28.650 "max_namespaces": 32, 00:15:28.650 "min_cntlid": 1, 00:15:28.650 "max_cntlid": 65519, 00:15:28.650 "namespaces": [ 00:15:28.650 { 00:15:28.650 "nsid": 1, 00:15:28.650 "bdev_name": "Null3", 00:15:28.650 "name": "Null3", 00:15:28.650 "nguid": "96C42AF76DF04DCAA1813E1EB58A542A", 00:15:28.912 "uuid": "96c42af7-6df0-4dca-a181-3e1eb58a542a" 00:15:28.912 } 00:15:28.912 ] 00:15:28.912 }, 00:15:28.912 { 00:15:28.912 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:28.912 "subtype": "NVMe", 00:15:28.912 "listen_addresses": [ 00:15:28.912 { 00:15:28.912 "trtype": "TCP", 00:15:28.912 "adrfam": "IPv4", 00:15:28.912 "traddr": "10.0.0.2", 00:15:28.912 "trsvcid": "4420" 00:15:28.912 } 00:15:28.912 ], 00:15:28.912 "allow_any_host": true, 00:15:28.912 "hosts": [], 00:15:28.912 "serial_number": "SPDK00000000000004", 00:15:28.912 "model_number": "SPDK bdev Controller", 00:15:28.912 "max_namespaces": 32, 00:15:28.912 "min_cntlid": 1, 00:15:28.912 "max_cntlid": 65519, 00:15:28.912 "namespaces": [ 00:15:28.912 { 00:15:28.912 "nsid": 1, 00:15:28.912 "bdev_name": "Null4", 00:15:28.912 "name": "Null4", 00:15:28.912 "nguid": "10676F2E1A224FC6802E94D934211258", 00:15:28.912 "uuid": "10676f2e-1a22-4fc6-802e-94d934211258" 00:15:28.912 } 00:15:28.912 ] 00:15:28.912 } 00:15:28.912 ] 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:28.913 06:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:28.913 rmmod nvme_tcp 00:15:28.913 rmmod nvme_fabrics 00:15:28.913 rmmod nvme_keyring 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2729068 ']' 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2729068 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2729068 ']' 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2729068 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:28.913 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2729068 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2729068' 00:15:29.174 killing process with pid 2729068 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2729068 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2729068 00:15:29.174 06:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.174 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:31.723 00:15:31.723 real 0m11.774s 00:15:31.723 user 0m9.055s 00:15:31.723 sys 0m6.196s 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.723 ************************************ 00:15:31.723 END TEST nvmf_target_discovery 00:15:31.723 ************************************ 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.723 ************************************ 00:15:31.723 START TEST nvmf_referrals 00:15:31.723 ************************************ 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:31.723 * Looking for test storage... 
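The nvmf_target_discovery test that finishes above boils down to a small RPC loop: create a null bdev, wrap it in a subsystem, attach the namespace and a TCP listener, exercise discovery, then unwind everything. A condensed sketch of that sequence, assuming SPDK's scripts/rpc.py (the rpc_cmd seen in the trace is a shell wrapper that reaches the same RPCs):

    # Setup: four null-backed subsystems, each listening on 10.0.0.2:4420
    # (addresses and serial numbers copied from the trace above).
    for i in $(seq 1 4); do
        scripts/rpc.py bdev_null_create "Null$i" 102400 512
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # Teardown mirrors setup, then verifies nothing was left behind.
    for i in $(seq 1 4); do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "Null$i"
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    [[ -z "$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')" ]]

Between setup and teardown, nvme discover against 10.0.0.2:4420 is expected to report the six records logged earlier: the current discovery subsystem, the four cnode subsystems, and the 10.0.0.2:4430 referral.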
00:15:31.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:31.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.723 --rc genhtml_branch_coverage=1 00:15:31.723 --rc genhtml_function_coverage=1 00:15:31.723 --rc genhtml_legend=1 00:15:31.723 --rc geninfo_all_blocks=1 00:15:31.723 --rc geninfo_unexecuted_blocks=1 00:15:31.723 00:15:31.723 ' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:31.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.723 --rc genhtml_branch_coverage=1 00:15:31.723 --rc genhtml_function_coverage=1 00:15:31.723 --rc genhtml_legend=1 00:15:31.723 --rc geninfo_all_blocks=1 00:15:31.723 --rc geninfo_unexecuted_blocks=1 00:15:31.723 00:15:31.723 ' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:31.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.723 --rc genhtml_branch_coverage=1 00:15:31.723 --rc genhtml_function_coverage=1 00:15:31.723 --rc genhtml_legend=1 00:15:31.723 --rc geninfo_all_blocks=1 00:15:31.723 --rc geninfo_unexecuted_blocks=1 00:15:31.723 00:15:31.723 ' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:31.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.723 --rc genhtml_branch_coverage=1 00:15:31.723 --rc genhtml_function_coverage=1 00:15:31.723 --rc genhtml_legend=1 00:15:31.723 --rc geninfo_all_blocks=1 00:15:31.723 --rc geninfo_unexecuted_blocks=1 00:15:31.723 00:15:31.723 ' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.723 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:31.724 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:15:39.954 06:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:39.954 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:39.954 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:39.954 
06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.954 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:39.955 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:39.955 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:39.955 06:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.955 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:39.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:15:39.955 00:15:39.955 --- 10.0.0.2 ping statistics --- 00:15:39.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.955 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:15:39.955 00:15:39.955 --- 10.0.0.1 ping statistics --- 00:15:39.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.955 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2733558 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2733558 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2733558 ']' 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
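Those two pings close out nvmf_tcp_init: with only one physical host available, the test isolates the target-side port in its own network namespace so the NVMe/TCP traffic crosses a real link between the two e810 ports. Condensed from the trace (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this rig):

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment \
        --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The SPDK_NVMF comment tag on the iptables rule is what lets nvmftestfini strip it again later with iptables-save | grep -v SPDK_NVMF | iptables-restore, as traced at the end of the previous test.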
00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:39.955 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:39.955 [2024-11-20 06:25:59.392898] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:15:39.955 [2024-11-20 06:25:59.392966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.955 [2024-11-20 06:25:59.490388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.955 [2024-11-20 06:25:59.543484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.955 [2024-11-20 06:25:59.543536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.955 [2024-11-20 06:25:59.543545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.955 [2024-11-20 06:25:59.543552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.955 [2024-11-20 06:25:59.543558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.955 [2024-11-20 06:25:59.545933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.955 [2024-11-20 06:25:59.546092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.955 [2024-11-20 06:25:59.546292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.955 [2024-11-20 06:25:59.546294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.955 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.955 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:15:39.955 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:39.955 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.955 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.217 [2024-11-20 06:26:00.271922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
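With nvmf_tgt running inside the target namespace, the referrals test wires up its fixture: a TCP transport, a discovery listener on port 8009 (the conventional NVMe/TCP discovery port), and, as the trace continues below, three referrals pointing at 127.0.0.2 through 127.0.0.4. A sketch of the equivalent RPC sequence (again assuming scripts/rpc.py; transport options copied verbatim from NVMF_TRANSPORT_OPTS in the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do        # NVMF_REFERRAL_IP_1..3
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

Port 4430 is NVMF_PORT_REFERRAL from the script's preamble: the port at which each advertised referral claims a further discovery controller can be reached.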
00:15:40.217 [2024-11-20 06:26:00.297430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:40.217 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:40.479 06:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:40.479 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:40.740 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:41.002 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:41.002 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:41.002 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:41.002 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:41.002 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:41.002 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:41.002 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:41.263 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:41.263 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:41.263 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:41.263 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:41.263 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:41.263 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.264 06:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:41.264 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.525 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:41.525 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:41.525 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:41.525 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:41.525 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:41.526 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:41.786 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:41.786 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:41.786 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:41.786 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:15:41.786 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:41.786 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:42.047 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:42.307 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:42.307 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:42.307 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:42.307 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:42.307 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:42.307 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:15:42.307 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
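The xtrace above reduces to a short add/verify/remove round-trip for discovery referrals. A minimal sketch distilled from the trace, assuming SPDK's scripts/rpc.py stands in for the test's rpc_cmd wrapper and that the --hostnqn/--hostid arguments shown in the log are supplied where nvme-cli requires them:

# target side: register referrals to three alternate discovery services
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
rpc.py nvmf_discovery_get_referrals | jq length          # expect 3

# host side: the same referrals must appear in the discovery log
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# tear down one referral and re-check the count on the target side
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_get_referrals | jq length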
00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:42.308 rmmod nvme_tcp 00:15:42.308 rmmod nvme_fabrics 00:15:42.308 rmmod nvme_keyring 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2733558 ']' 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2733558 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2733558 ']' 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2733558 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2733558 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2733558' 00:15:42.308 killing process with pid 2733558 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2733558 00:15:42.308 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2733558 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.568 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.568 06:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:44.482 00:15:44.482 real 0m13.103s 00:15:44.482 user 0m15.321s 00:15:44.482 sys 0m6.522s 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:44.482 ************************************ 00:15:44.482 END TEST nvmf_referrals 00:15:44.482 ************************************ 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.482 ************************************ 00:15:44.482 START TEST nvmf_connect_disconnect 00:15:44.482 ************************************ 00:15:44.482 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:44.745 * Looking for test storage... 00:15:44.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:44.745 06:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:44.745 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.746 --rc genhtml_branch_coverage=1 00:15:44.746 --rc genhtml_function_coverage=1 00:15:44.746 --rc genhtml_legend=1 00:15:44.746 --rc geninfo_all_blocks=1 00:15:44.746 --rc geninfo_unexecuted_blocks=1 00:15:44.746 00:15:44.746 ' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.746 --rc genhtml_branch_coverage=1 00:15:44.746 --rc genhtml_function_coverage=1 00:15:44.746 --rc genhtml_legend=1 00:15:44.746 --rc geninfo_all_blocks=1 00:15:44.746 --rc geninfo_unexecuted_blocks=1 00:15:44.746 00:15:44.746 ' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.746 --rc genhtml_branch_coverage=1 00:15:44.746 --rc genhtml_function_coverage=1 00:15:44.746 --rc genhtml_legend=1 00:15:44.746 --rc geninfo_all_blocks=1 00:15:44.746 --rc geninfo_unexecuted_blocks=1 00:15:44.746 00:15:44.746 ' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.746 --rc genhtml_branch_coverage=1 00:15:44.746 --rc genhtml_function_coverage=1 00:15:44.746 --rc genhtml_legend=1 00:15:44.746 --rc geninfo_all_blocks=1 00:15:44.746 --rc geninfo_unexecuted_blocks=1 00:15:44.746 00:15:44.746 ' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.746 06:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:44.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.746 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:44.747 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:44.747 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:44.747 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:52.891 
06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.891 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:52.892 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:52.892 
06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:52.892 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:52.892 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
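Stripped of the xtrace prefixes, the NIC probe traced here is just a sysfs glob: for each whitelisted PCI function, the harness lists the net devices the kernel bound to it. A hypothetical standalone form of the same lookup, using the two e810 functions found in this run:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries for this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done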
00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:52.892 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:52.892 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:52.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:15:52.892 00:15:52.892 --- 10.0.0.2 ping statistics --- 00:15:52.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.892 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:15:52.893 00:15:52.893 --- 10.0.0.1 ping statistics --- 00:15:52.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.893 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2738565 00:15:52.893 06:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2738565 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2738565 ']' 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:52.893 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:52.893 [2024-11-20 06:26:12.572586] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:15:52.893 [2024-11-20 06:26:12.572653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.893 [2024-11-20 06:26:12.672071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.893 [2024-11-20 06:26:12.725350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.893 [2024-11-20 06:26:12.725403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.893 [2024-11-20 06:26:12.725412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.893 [2024-11-20 06:26:12.725419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.893 [2024-11-20 06:26:12.725425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
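Behind the trace above, the TCP fixture is a plain network-namespace split: the e810 port cvl_0_0 moves into a private namespace and carries the target address, while its peer cvl_0_1 stays in the root namespace as the initiator. A sketch using the interface names, addresses, and flags from this run (nvmf_tgt path abbreviated; the full path is in the log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# then start the target inside the namespace with the flags from the log
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &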
00:15:52.893 [2024-11-20 06:26:12.727451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.893 [2024-11-20 06:26:12.727612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.893 [2024-11-20 06:26:12.727773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.893 [2024-11-20 06:26:12.727774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.154 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:53.154 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:15:53.154 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.154 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:53.154 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:53.415 [2024-11-20 06:26:13.453189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:53.415 06:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:53.415 [2024-11-20 06:26:13.531536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:53.415 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:57.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.730 rmmod nvme_tcp 00:16:11.730 rmmod nvme_fabrics 00:16:11.730 rmmod nvme_keyring 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2738565 ']' 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2738565 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2738565 ']' 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2738565 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2738565 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2738565' 00:16:11.730 killing process with pid 2738565 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2738565 00:16:11.730 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2738565 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.991 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.996 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.996 00:16:13.996 real 0m29.390s 00:16:13.996 user 1m19.159s 00:16:13.996 sys 0m7.201s 00:16:13.996 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:13.996 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 ************************************ 00:16:13.996 END TEST nvmf_connect_disconnect 00:16:13.996 ************************************ 00:16:13.996 06:26:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:13.996 06:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:13.996 06:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:13.996 06:26:34 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 ************************************ 00:16:13.996 START TEST nvmf_multitarget 00:16:13.996 ************************************ 00:16:13.996 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:14.257 * Looking for test storage... 00:16:14.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:14.257 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:14.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.258 --rc genhtml_branch_coverage=1 00:16:14.258 --rc genhtml_function_coverage=1 00:16:14.258 --rc genhtml_legend=1 00:16:14.258 --rc geninfo_all_blocks=1 00:16:14.258 --rc geninfo_unexecuted_blocks=1 00:16:14.258 00:16:14.258 ' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:14.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.258 --rc genhtml_branch_coverage=1 00:16:14.258 --rc genhtml_function_coverage=1 00:16:14.258 --rc genhtml_legend=1 00:16:14.258 --rc geninfo_all_blocks=1 00:16:14.258 --rc geninfo_unexecuted_blocks=1 00:16:14.258 00:16:14.258 ' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:14.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.258 --rc genhtml_branch_coverage=1 00:16:14.258 --rc genhtml_function_coverage=1 00:16:14.258 --rc genhtml_legend=1 00:16:14.258 --rc geninfo_all_blocks=1 00:16:14.258 --rc geninfo_unexecuted_blocks=1 00:16:14.258 00:16:14.258 ' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:14.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.258 --rc genhtml_branch_coverage=1 00:16:14.258 --rc genhtml_function_coverage=1 00:16:14.258 --rc genhtml_legend=1 00:16:14.258 --rc geninfo_all_blocks=1 00:16:14.258 --rc geninfo_unexecuted_blocks=1 00:16:14.258 00:16:14.258 ' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.258 06:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:14.258 06:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:14.258 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:22.400 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:22.400 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.400 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:22.401 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:22.401 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:22.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:16:22.401 00:16:22.401 --- 10.0.0.2 ping statistics --- 00:16:22.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.401 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:16:22.401 00:16:22.401 --- 10.0.0.1 ping statistics --- 00:16:22.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.401 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2746458 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2746458 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2746458 ']' 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:22.401 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:22.401 [2024-11-20 06:26:41.862489] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:16:22.401 [2024-11-20 06:26:41.862557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.401 [2024-11-20 06:26:41.964667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.401 [2024-11-20 06:26:42.018036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.401 [2024-11-20 06:26:42.018091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.401 [2024-11-20 06:26:42.018100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.401 [2024-11-20 06:26:42.018107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.401 [2024-11-20 06:26:42.018114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.401 [2024-11-20 06:26:42.020510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.401 [2024-11-20 06:26:42.020675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.401 [2024-11-20 06:26:42.020838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.401 [2024-11-20 06:26:42.020840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.401 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:22.401 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:16:22.401 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.401 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.401 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:22.662 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.662 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:22.662 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:22.662 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:22.662 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:22.662 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:22.662 "nvmf_tgt_1" 00:16:22.662 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:22.923 "nvmf_tgt_2" 00:16:22.923 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:22.923 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:22.923 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:22.923 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:23.184 true 00:16:23.184 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:23.184 true 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:23.185 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:23.185 rmmod nvme_tcp 00:16:23.446 rmmod nvme_fabrics 00:16:23.446 rmmod nvme_keyring 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2746458 ']' 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2746458 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2746458 ']' 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2746458 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2746458 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.446 06:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2746458' 00:16:23.446 killing process with pid 2746458 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2746458 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2746458 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.446 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:25.996 00:16:25.996 real 0m11.562s 00:16:25.996 user 0m9.688s 00:16:25.996 sys 0m5.999s 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:25.996 ************************************ 00:16:25.996 END TEST nvmf_multitarget 00:16:25.996 ************************************ 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.996 ************************************ 00:16:25.996 START TEST nvmf_rpc 00:16:25.996 ************************************ 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:25.996 * Looking for test storage... 
00:16:25.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:25.996 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.996 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:25.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.997 --rc genhtml_branch_coverage=1 00:16:25.997 --rc genhtml_function_coverage=1 00:16:25.997 --rc genhtml_legend=1 00:16:25.997 --rc geninfo_all_blocks=1 00:16:25.997 --rc geninfo_unexecuted_blocks=1 00:16:25.997 00:16:25.997 ' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:25.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.997 --rc genhtml_branch_coverage=1 00:16:25.997 --rc genhtml_function_coverage=1 00:16:25.997 --rc genhtml_legend=1 00:16:25.997 --rc geninfo_all_blocks=1 00:16:25.997 --rc geninfo_unexecuted_blocks=1 00:16:25.997 00:16:25.997 ' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:25.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.997 --rc genhtml_branch_coverage=1 00:16:25.997 --rc genhtml_function_coverage=1 00:16:25.997 --rc genhtml_legend=1 00:16:25.997 --rc geninfo_all_blocks=1 00:16:25.997 --rc geninfo_unexecuted_blocks=1 00:16:25.997 00:16:25.997 ' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:25.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.997 --rc genhtml_branch_coverage=1 00:16:25.997 --rc genhtml_function_coverage=1 00:16:25.997 --rc genhtml_legend=1 00:16:25.997 --rc geninfo_all_blocks=1 00:16:25.997 --rc geninfo_unexecuted_blocks=1 00:16:25.997 00:16:25.997 ' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.997 06:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.997 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:34.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:34.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:34.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:34.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:34.138 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.139 06:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:16:34.139 00:16:34.139 --- 10.0.0.2 ping statistics --- 00:16:34.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.139 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:16:34.139 00:16:34.139 --- 10.0.0.1 ping statistics --- 00:16:34.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.139 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2751156 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2751156 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2751156 ']' 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:34.139 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.139 [2024-11-20 06:26:53.720514] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
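The trace up to this point is nvmftestinit building the test topology: of the two cvl_0_* ports discovered above, the target side (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace at 10.0.0.2 while the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, an iptables rule admits NVMe/TCP on port 4420, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A minimal sketch of the same setup, assuming two directly cabled ports named p0/p1 (standing in for cvl_0_0/cvl_0_1) and a built nvmf_tgt binary:

    # target port goes into its own namespace; initiator port stays in the root ns
    ip netns add nvmf_ns                                    # hypothetical namespace name
    ip link set p0 netns nvmf_ns
    ip addr add 10.0.0.1/24 dev p1                          # initiator side
    ip netns exec nvmf_ns ip addr add 10.0.0.2/24 dev p0    # target side
    ip link set p1 up
    ip netns exec nvmf_ns ip link set p0 up
    ip netns exec nvmf_ns ip link set lo up
    iptables -I INPUT 1 -i p1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                      # initiator -> target sanity check
    ip netns exec nvmf_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &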
00:16:34.139 [2024-11-20 06:26:53.720582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.139 [2024-11-20 06:26:53.819826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.139 [2024-11-20 06:26:53.873696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.139 [2024-11-20 06:26:53.873752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.139 [2024-11-20 06:26:53.873761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.139 [2024-11-20 06:26:53.873768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.139 [2024-11-20 06:26:53.873774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.139 [2024-11-20 06:26:53.875760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.139 [2024-11-20 06:26:53.875922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.139 [2024-11-20 06:26:53.876072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.139 [2024-11-20 06:26:53.876072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:34.401 "tick_rate": 2400000000, 00:16:34.401 "poll_groups": [ 00:16:34.401 { 00:16:34.401 "name": "nvmf_tgt_poll_group_000", 00:16:34.401 "admin_qpairs": 0, 00:16:34.401 "io_qpairs": 0, 00:16:34.401 "current_admin_qpairs": 0, 00:16:34.401 "current_io_qpairs": 0, 00:16:34.401 "pending_bdev_io": 0, 00:16:34.401 "completed_nvme_io": 0, 00:16:34.401 "transports": [] 00:16:34.401 }, 00:16:34.401 { 00:16:34.401 "name": "nvmf_tgt_poll_group_001", 00:16:34.401 "admin_qpairs": 0, 00:16:34.401 "io_qpairs": 0, 00:16:34.401 "current_admin_qpairs": 0, 00:16:34.401 "current_io_qpairs": 0, 00:16:34.401 "pending_bdev_io": 0, 00:16:34.401 "completed_nvme_io": 0, 00:16:34.401 "transports": [] 00:16:34.401 }, 00:16:34.401 { 00:16:34.401 "name": "nvmf_tgt_poll_group_002", 00:16:34.401 "admin_qpairs": 0, 00:16:34.401 "io_qpairs": 0, 00:16:34.401 
"current_admin_qpairs": 0, 00:16:34.401 "current_io_qpairs": 0, 00:16:34.401 "pending_bdev_io": 0, 00:16:34.401 "completed_nvme_io": 0, 00:16:34.401 "transports": [] 00:16:34.401 }, 00:16:34.401 { 00:16:34.401 "name": "nvmf_tgt_poll_group_003", 00:16:34.401 "admin_qpairs": 0, 00:16:34.401 "io_qpairs": 0, 00:16:34.401 "current_admin_qpairs": 0, 00:16:34.401 "current_io_qpairs": 0, 00:16:34.401 "pending_bdev_io": 0, 00:16:34.401 "completed_nvme_io": 0, 00:16:34.401 "transports": [] 00:16:34.401 } 00:16:34.401 ] 00:16:34.401 }' 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:34.401 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.663 [2024-11-20 06:26:54.710549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.663 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:34.663 "tick_rate": 2400000000, 00:16:34.663 "poll_groups": [ 00:16:34.663 { 00:16:34.663 "name": "nvmf_tgt_poll_group_000", 00:16:34.663 "admin_qpairs": 0, 00:16:34.663 "io_qpairs": 0, 00:16:34.663 "current_admin_qpairs": 0, 00:16:34.663 "current_io_qpairs": 0, 00:16:34.663 "pending_bdev_io": 0, 00:16:34.663 "completed_nvme_io": 0, 00:16:34.663 "transports": [ 00:16:34.663 { 00:16:34.663 "trtype": "TCP" 00:16:34.663 } 00:16:34.663 ] 00:16:34.663 }, 00:16:34.663 { 00:16:34.663 "name": "nvmf_tgt_poll_group_001", 00:16:34.663 "admin_qpairs": 0, 00:16:34.663 "io_qpairs": 0, 00:16:34.663 "current_admin_qpairs": 0, 00:16:34.663 "current_io_qpairs": 0, 00:16:34.663 "pending_bdev_io": 0, 00:16:34.663 "completed_nvme_io": 0, 00:16:34.663 "transports": [ 00:16:34.664 { 00:16:34.664 "trtype": "TCP" 00:16:34.664 } 00:16:34.664 ] 00:16:34.664 }, 00:16:34.664 { 00:16:34.664 "name": "nvmf_tgt_poll_group_002", 00:16:34.664 "admin_qpairs": 0, 00:16:34.664 "io_qpairs": 0, 00:16:34.664 "current_admin_qpairs": 0, 00:16:34.664 "current_io_qpairs": 0, 00:16:34.664 "pending_bdev_io": 0, 00:16:34.664 "completed_nvme_io": 0, 00:16:34.664 "transports": [ 00:16:34.664 { 00:16:34.664 "trtype": "TCP" 
00:16:34.664 } 00:16:34.664 ] 00:16:34.664 }, 00:16:34.664 { 00:16:34.664 "name": "nvmf_tgt_poll_group_003", 00:16:34.664 "admin_qpairs": 0, 00:16:34.664 "io_qpairs": 0, 00:16:34.664 "current_admin_qpairs": 0, 00:16:34.664 "current_io_qpairs": 0, 00:16:34.664 "pending_bdev_io": 0, 00:16:34.664 "completed_nvme_io": 0, 00:16:34.664 "transports": [ 00:16:34.664 { 00:16:34.664 "trtype": "TCP" 00:16:34.664 } 00:16:34.664 ] 00:16:34.664 } 00:16:34.664 ] 00:16:34.664 }' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.664 Malloc1 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.664 [2024-11-20 06:26:54.924257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:34.664 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:34.925 [2024-11-20 06:26:54.961287] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:16:34.925 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:34.925 could not add new controller: failed to write to nvme-fabrics device 00:16:34.925 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:34.925 06:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.925 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.925 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.925 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.925 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.925 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.925 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.925 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.310 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.310 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:36.310 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.310 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:36.310 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:38.221 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:38.221 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:38.221 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.481 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.482 [2024-11-20 06:26:58.695938] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:16:38.482 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:38.482 could not add new controller: failed to write to nvme-fabrics device 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.482 
06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.482 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.401 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.401 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:40.401 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.401 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:40.401 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:42.315 
06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.315 [2024-11-20 06:27:02.462948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.315 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.702 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.702 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:43.702 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.702 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:43.702 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:46.248 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:46.248 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:46.248 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.248 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:46.248 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.248 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:46.248 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.248 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.249 [2024-11-20 06:27:06.185924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.249 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.637 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.637 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:47.637 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.637 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:47.637 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:49.552 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:49.552 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:49.552 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.552 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:49.552 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.552 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:49.552 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.813 [2024-11-20 06:27:09.954135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.199 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.199 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:51.199 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.199 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:51.199 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:53.756 
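Each of the five iterations traced here (loops=5 in target/rpc.sh) repeats the same create/connect/teardown cycle; rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. One iteration, sketched with rpc.py and nvme-cli invoked directly (the --hostnqn/--hostid flags from the generated UUID pair are omitted here for brevity):

    for i in $(seq 1 5); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach the malloc bdev as NSID 5
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      sleep 2                                  # stand-in for the waitforserial lsblk polling
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done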
06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.756 [2024-11-20 06:27:13.626676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.756 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.142 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.142 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:55.143 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.143 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:55.143 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
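waitforserial and waitforserial_disconnect, traced here at common/autotest_common.sh@1200-1233, are plain polling helpers. A minimal sketch assembled from the traced lines (the retry bound and the lsblk/grep probes are taken verbatim from the trace; the disconnect-side sleep interval is not visible, so it is an assumption):

    waitforserial() {                                  # @1200-1210
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2                                    # @1207
            # count block devices whose SERIAL column matches the expected serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {                       # @1221-1233
        local serial=$1 i=0
        # poll until the serial disappears from lsblk
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1
            sleep 1                                    # assumed interval
        done
        return 0
    }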
00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.058 [2024-11-20 06:27:17.305785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.058 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.319 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.704 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.704 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:58.704 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.704 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:58.704 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:00.627 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:00.627 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:00.627 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:00.888 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:00.888 
06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 [2024-11-20 06:27:21.070053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 [2024-11-20 06:27:21.134210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.888 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.889 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.889 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.889 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.889 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.889 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.889 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.149 
06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.149 [2024-11-20 06:27:21.202378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.149 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 [2024-11-20 06:27:21.274619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 [2024-11-20 06:27:21.338819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:01.150 "tick_rate": 2400000000, 00:17:01.150 "poll_groups": [ 00:17:01.150 { 00:17:01.150 "name": "nvmf_tgt_poll_group_000", 00:17:01.150 "admin_qpairs": 0, 00:17:01.150 "io_qpairs": 224, 00:17:01.150 "current_admin_qpairs": 0, 00:17:01.150 "current_io_qpairs": 0, 00:17:01.150 "pending_bdev_io": 0, 00:17:01.150 "completed_nvme_io": 276, 00:17:01.150 "transports": [ 00:17:01.150 { 00:17:01.150 "trtype": "TCP" 00:17:01.150 } 00:17:01.150 ] 00:17:01.150 }, 00:17:01.150 { 00:17:01.150 "name": "nvmf_tgt_poll_group_001", 00:17:01.150 "admin_qpairs": 1, 00:17:01.150 "io_qpairs": 223, 00:17:01.150 "current_admin_qpairs": 0, 00:17:01.150 "current_io_qpairs": 0, 00:17:01.150 "pending_bdev_io": 0, 00:17:01.150 "completed_nvme_io": 223, 00:17:01.150 "transports": [ 00:17:01.150 { 00:17:01.150 "trtype": "TCP" 00:17:01.150 } 00:17:01.150 ] 00:17:01.150 }, 00:17:01.150 { 00:17:01.150 "name": "nvmf_tgt_poll_group_002", 00:17:01.150 "admin_qpairs": 6, 00:17:01.150 "io_qpairs": 218, 00:17:01.150 "current_admin_qpairs": 0, 00:17:01.150 "current_io_qpairs": 0, 00:17:01.150 "pending_bdev_io": 0, 00:17:01.150 "completed_nvme_io": 366, 00:17:01.150 "transports": [ 00:17:01.150 { 00:17:01.150 "trtype": "TCP" 00:17:01.150 } 00:17:01.150 ] 00:17:01.150 }, 00:17:01.150 { 00:17:01.150 "name": "nvmf_tgt_poll_group_003", 00:17:01.150 "admin_qpairs": 0, 00:17:01.150 "io_qpairs": 224, 00:17:01.150 "current_admin_qpairs": 0, 00:17:01.150 "current_io_qpairs": 0, 00:17:01.150 "pending_bdev_io": 0, 00:17:01.150 "completed_nvme_io": 374, 00:17:01.150 "transports": [ 00:17:01.150 { 00:17:01.150 "trtype": "TCP" 00:17:01.150 } 00:17:01.150 ] 00:17:01.150 } 00:17:01.150 ] 00:17:01.150 }' 00:17:01.150 06:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:01.150 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.411 rmmod nvme_tcp 00:17:01.411 rmmod nvme_fabrics 00:17:01.411 rmmod nvme_keyring 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2751156 ']' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2751156 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2751156 ']' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2751156 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2751156 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2751156' 00:17:01.411 killing process with pid 2751156 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2751156 00:17:01.411 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2751156 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.673 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.586 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.586 00:17:03.586 real 0m37.975s 00:17:03.586 user 1m53.481s 00:17:03.586 sys 0m7.961s 00:17:03.586 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:03.586 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.586 ************************************ 00:17:03.586 END TEST nvmf_rpc 00:17:03.586 ************************************ 00:17:03.847 06:27:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.847 06:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:03.847 06:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:03.847 06:27:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.847 ************************************ 00:17:03.847 START TEST nvmf_invalid 00:17:03.847 ************************************ 00:17:03.847 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.847 * Looking for test storage... 
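Stepping back to the stats check that closed out nvmf_rpc just above: jsum (target/rpc.sh@19-20, invoked at @112-113) sums one numeric field across every poll group reported by nvmf_get_stats. A sketch of the helper; exactly how the captured $stats JSON is fed to jq is not visible in the trace, so the here-string is an assumption:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'   # @20
    }

    jsum '.poll_groups[].admin_qpairs'   # 0+1+6+0 = 7, matching the traced (( 7 > 0 ))
    jsum '.poll_groups[].io_qpairs'      # 224+223+218+224 = 889, matching (( 889 > 0 ))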
00:17:03.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.847 --rc genhtml_branch_coverage=1 00:17:03.847 --rc genhtml_function_coverage=1 00:17:03.847 --rc genhtml_legend=1 00:17:03.847 --rc geninfo_all_blocks=1 00:17:03.847 --rc geninfo_unexecuted_blocks=1 00:17:03.847 00:17:03.847 ' 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.847 --rc genhtml_branch_coverage=1 00:17:03.847 --rc genhtml_function_coverage=1 00:17:03.847 --rc genhtml_legend=1 00:17:03.847 --rc geninfo_all_blocks=1 00:17:03.847 --rc geninfo_unexecuted_blocks=1 00:17:03.847 00:17:03.847 ' 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.847 --rc genhtml_branch_coverage=1 00:17:03.847 --rc genhtml_function_coverage=1 00:17:03.847 --rc genhtml_legend=1 00:17:03.847 --rc geninfo_all_blocks=1 00:17:03.847 --rc geninfo_unexecuted_blocks=1 00:17:03.847 00:17:03.847 ' 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.847 --rc genhtml_branch_coverage=1 00:17:03.847 --rc genhtml_function_coverage=1 00:17:03.847 --rc genhtml_legend=1 00:17:03.847 --rc geninfo_all_blocks=1 00:17:03.847 --rc geninfo_unexecuted_blocks=1 00:17:03.847 00:17:03.847 ' 00:17:03.847 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:04.110 06:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:04.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:04.110 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:12.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:12.255 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:12.255 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:12.255 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:12.255 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:12.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:17:12.256 00:17:12.256 --- 10.0.0.2 ping statistics --- 00:17:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.256 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:12.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:17:12.256 00:17:12.256 --- 10.0.0.1 ping statistics --- 00:17:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.256 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2761433 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2761433 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2761433 ']' 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.256 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.256 [2024-11-20 06:27:31.668053] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
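The bring-up traced above is the heart of nvmf_tcp_init: resolve each E810 PCI function to its kernel netdev through sysfs, move the target-side port into a private network namespace so initiator and target traffic cross the physical link, open TCP port 4420, and ping in both directions before nvmf_tgt is launched inside the namespace (the startup that begins just above). A condensed sketch of the same steps, assuming the interface names and addresses from this run; run as root and adjust for other NICs:

# Resolve a PCI function to its netdev the way nvmf/common.sh does:
# interfaces bound to the function appear under its sysfs 'net' directory.
pci=0000:4b:00.0                                     # first E810 port in this run
devs=("/sys/bus/pci/devices/$pci/net/"*)
echo "Found net devices under $pci: ${devs[*]##*/}"  # -> cvl_0_0

# Isolate the target port in its own namespace and verify the path.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator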
00:17:12.256 [2024-11-20 06:27:31.668125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.256 [2024-11-20 06:27:31.768466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.256 [2024-11-20 06:27:31.822050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.256 [2024-11-20 06:27:31.822105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.256 [2024-11-20 06:27:31.822114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.256 [2024-11-20 06:27:31.822121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.256 [2024-11-20 06:27:31.822128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.256 [2024-11-20 06:27:31.824327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.256 [2024-11-20 06:27:31.824627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.256 [2024-11-20 06:27:31.824789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.256 [2024-11-20 06:27:31.824791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.256 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.256 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:17:12.256 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.256 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:12.256 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.517 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.517 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:12.517 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27895 00:17:12.517 [2024-11-20 06:27:32.707211] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:12.518 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:12.518 { 00:17:12.518 "nqn": "nqn.2016-06.io.spdk:cnode27895", 00:17:12.518 "tgt_name": "foobar", 00:17:12.518 "method": "nvmf_create_subsystem", 00:17:12.518 "req_id": 1 00:17:12.518 } 00:17:12.518 Got JSON-RPC error response 00:17:12.518 response: 00:17:12.518 { 00:17:12.518 "code": -32603, 00:17:12.518 "message": "Unable to find target foobar" 00:17:12.518 }' 00:17:12.518 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:12.518 { 00:17:12.518 "nqn": "nqn.2016-06.io.spdk:cnode27895", 00:17:12.518 "tgt_name": "foobar", 00:17:12.518 "method": "nvmf_create_subsystem", 00:17:12.518 "req_id": 1 00:17:12.518 } 00:17:12.518 Got JSON-RPC error response 00:17:12.518 
response: 00:17:12.518 { 00:17:12.518 "code": -32603, 00:17:12.518 "message": "Unable to find target foobar" 00:17:12.518 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:12.518 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:12.518 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15873 00:17:12.778 [2024-11-20 06:27:32.912043] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15873: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:12.778 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:12.778 { 00:17:12.778 "nqn": "nqn.2016-06.io.spdk:cnode15873", 00:17:12.778 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:12.778 "method": "nvmf_create_subsystem", 00:17:12.778 "req_id": 1 00:17:12.778 } 00:17:12.778 Got JSON-RPC error response 00:17:12.778 response: 00:17:12.778 { 00:17:12.778 "code": -32602, 00:17:12.778 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:12.778 }' 00:17:12.778 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:12.778 { 00:17:12.778 "nqn": "nqn.2016-06.io.spdk:cnode15873", 00:17:12.778 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:12.778 "method": "nvmf_create_subsystem", 00:17:12.778 "req_id": 1 00:17:12.778 } 00:17:12.778 Got JSON-RPC error response 00:17:12.778 response: 00:17:12.779 { 00:17:12.779 "code": -32602, 00:17:12.779 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:12.779 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:12.779 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:12.779 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14898 00:17:13.041 [2024-11-20 06:27:33.116765] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14898: invalid model number 'SPDK_Controller' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:13.041 { 00:17:13.041 "nqn": "nqn.2016-06.io.spdk:cnode14898", 00:17:13.041 "model_number": "SPDK_Controller\u001f", 00:17:13.041 "method": "nvmf_create_subsystem", 00:17:13.041 "req_id": 1 00:17:13.041 } 00:17:13.041 Got JSON-RPC error response 00:17:13.041 response: 00:17:13.041 { 00:17:13.041 "code": -32602, 00:17:13.041 "message": "Invalid MN SPDK_Controller\u001f" 00:17:13.041 }' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:13.041 { 00:17:13.041 "nqn": "nqn.2016-06.io.spdk:cnode14898", 00:17:13.041 "model_number": "SPDK_Controller\u001f", 00:17:13.041 "method": "nvmf_create_subsystem", 00:17:13.041 "req_id": 1 00:17:13.041 } 00:17:13.041 Got JSON-RPC error response 00:17:13.041 response: 00:17:13.041 { 00:17:13.041 "code": -32602, 00:17:13.041 "message": "Invalid MN SPDK_Controller\u001f" 00:17:13.041 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:13.041 06:27:33 
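The long run of (( ll++ )) / printf %x / echo -e records that follows is gen_random_s from target/invalid.sh assembling a random string one character at a time: each position draws an ASCII code from a 32..127 table, converts it to hex with printf %x, renders it with echo -e, and appends it, so the generated serial and model numbers exercise the whole printable range plus DEL. A compact sketch of the same idea, with the randomness source simplified (the real helper indexes its chars array):

# Build an N-character string from ASCII codes 32..127, mirroring the
# character-at-a-time loop traced below. A space (code 32) drawn as the
# final character would be dropped by the command substitution; this is a
# sketch, not a byte-exact reimplementation.
gen_random_s() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        string+=$(echo -e "\x$(printf %x $(( RANDOM % 96 + 32 )))")
    done
    echo "$string"
}
gen_random_s 21   # e.g. '\B*Lgg*g@"]HGJ:|z=-bL' later in this run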
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:13.041 
06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.041 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.042 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:13.042 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 
00:17:13.042 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:13.042 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.042 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.042 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:13.042 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\B*Lgg*g@"]HGJ:|z=-bL' 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\B*Lgg*g@"]HGJ:|z=-bL' nqn.2016-06.io.spdk:cnode8793 00:17:13.303 [2024-11-20 06:27:33.494176] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8793: invalid serial number '\B*Lgg*g@"]HGJ:|z=-bL' 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:13.303 { 00:17:13.303 "nqn": "nqn.2016-06.io.spdk:cnode8793", 00:17:13.303 "serial_number": "\\B*Lgg*g@\"]HGJ:|z=-bL", 00:17:13.303 "method": "nvmf_create_subsystem", 00:17:13.303 "req_id": 1 00:17:13.303 } 00:17:13.303 Got JSON-RPC error response 00:17:13.303 response: 00:17:13.303 { 00:17:13.303 "code": -32602, 00:17:13.303 "message": "Invalid SN \\B*Lgg*g@\"]HGJ:|z=-bL" 00:17:13.303 }' 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:13.303 { 00:17:13.303 "nqn": "nqn.2016-06.io.spdk:cnode8793", 00:17:13.303 "serial_number": "\\B*Lgg*g@\"]HGJ:|z=-bL", 00:17:13.303 "method": "nvmf_create_subsystem", 00:17:13.303 "req_id": 1 00:17:13.303 } 00:17:13.303 Got JSON-RPC error response 00:17:13.303 response: 00:17:13.303 { 00:17:13.303 "code": -32602, 00:17:13.303 "message": "Invalid SN \\B*Lgg*g@\"]HGJ:|z=-bL" 00:17:13.303 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:13.303 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.304 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:13.566 
06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:13.566 
06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:13.566 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=m 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.567 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.834 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ',+0LR@?JWvE*>O~-\Um}2#p7Pfr?RQTBm%6V!*GUg' 00:17:13.835 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ',+0LR@?JWvE*>O~-\Um}2#p7Pfr?RQTBm%6V!*GUg' nqn.2016-06.io.spdk:cnode29762 00:17:13.835 [2024-11-20 06:27:34.036324] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29762: invalid model number ',+0LR@?JWvE*>O~-\Um}2#p7Pfr?RQTBm%6V!*GUg' 00:17:13.835 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:13.835 { 00:17:13.835 "nqn": "nqn.2016-06.io.spdk:cnode29762", 00:17:13.835 "model_number": ",+0LR@?JWvE*>O~-\\Um}2#p7Pfr?RQTBm%6V!*GUg", 00:17:13.835 "method": "nvmf_create_subsystem", 00:17:13.835 "req_id": 1 00:17:13.835 } 00:17:13.835 Got JSON-RPC error response 00:17:13.835 response: 00:17:13.835 { 00:17:13.835 "code": -32602, 00:17:13.835 "message": "Invalid MN ,+0LR@?JWvE*>O~-\\Um}2#p7Pfr?RQTBm%6V!*GUg" 00:17:13.835 }' 
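Every negative test in this file follows the anatomy visible in the out=/[[ ... ]] pairs: issue a deliberately invalid JSON-RPC call through rpc.py, capture the error payload, and glob-match the message. A minimal standalone sketch of that pattern (rpc.py path shortened relative to the workspace paths above; || true because the calls are expected to fail):

# An unknown target name must produce "Unable to find target".
out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27895 2>&1) || true
[[ $out == *"Unable to find target"* ]] || echo "FAIL: unexpected response: $out"

# The 41-character random string just built must be rejected as an invalid
# model number, since NVMe caps the model number field at 40 bytes
# ("$modnum" stands in for the generated string).
out=$(scripts/rpc.py nvmf_create_subsystem -d "$modnum" nqn.2016-06.io.spdk:cnode29762 2>&1) || true
[[ $out == *"Invalid MN"* ]] || echo "FAIL: unexpected response: $out"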
00:17:13.835 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:13.835 { 00:17:13.835 "nqn": "nqn.2016-06.io.spdk:cnode29762", 00:17:13.835 "model_number": ",+0LR@?JWvE*>O~-\\Um}2#p7Pfr?RQTBm%6V!*GUg", 00:17:13.835 "method": "nvmf_create_subsystem", 00:17:13.835 "req_id": 1 00:17:13.835 } 00:17:13.835 Got JSON-RPC error response 00:17:13.835 response: 00:17:13.835 { 00:17:13.835 "code": -32602, 00:17:13.835 "message": "Invalid MN ,+0LR@?JWvE*>O~-\\Um}2#p7Pfr?RQTBm%6V!*GUg" 00:17:13.835 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:13.835 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:14.170 [2024-11-20 06:27:34.241202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.170 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:14.455 [2024-11-20 06:27:34.638534] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:14.455 { 00:17:14.455 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:14.455 "listen_address": { 00:17:14.455 "trtype": "tcp", 00:17:14.455 "traddr": "", 00:17:14.455 "trsvcid": "4421" 00:17:14.455 }, 00:17:14.455 "method": "nvmf_subsystem_remove_listener", 00:17:14.455 "req_id": 1 00:17:14.455 } 00:17:14.455 Got JSON-RPC error response 00:17:14.455 response: 00:17:14.455 { 00:17:14.455 "code": -32602, 00:17:14.455 "message": "Invalid parameters" 00:17:14.455 }' 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:14.455 { 00:17:14.455 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:14.455 "listen_address": { 00:17:14.455 "trtype": "tcp", 00:17:14.455 "traddr": "", 00:17:14.455 "trsvcid": "4421" 00:17:14.455 }, 00:17:14.455 "method": "nvmf_subsystem_remove_listener", 00:17:14.455 "req_id": 1 00:17:14.455 } 00:17:14.455 Got JSON-RPC error response 00:17:14.455 response: 00:17:14.455 { 00:17:14.455 "code": -32602, 00:17:14.455 "message": "Invalid parameters" 00:17:14.455 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:14.455 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8511 -i 0 00:17:14.721 [2024-11-20 06:27:34.819059] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8511: invalid cntlid range [0-65519] 00:17:14.721 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:14.721 { 
00:17:14.721 "nqn": "nqn.2016-06.io.spdk:cnode8511", 00:17:14.721 "min_cntlid": 0, 00:17:14.721 "method": "nvmf_create_subsystem", 00:17:14.721 "req_id": 1 00:17:14.721 } 00:17:14.721 Got JSON-RPC error response 00:17:14.721 response: 00:17:14.721 { 00:17:14.721 "code": -32602, 00:17:14.721 "message": "Invalid cntlid range [0-65519]" 00:17:14.721 }' 00:17:14.721 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:14.721 { 00:17:14.721 "nqn": "nqn.2016-06.io.spdk:cnode8511", 00:17:14.721 "min_cntlid": 0, 00:17:14.721 "method": "nvmf_create_subsystem", 00:17:14.721 "req_id": 1 00:17:14.721 } 00:17:14.721 Got JSON-RPC error response 00:17:14.721 response: 00:17:14.721 { 00:17:14.721 "code": -32602, 00:17:14.721 "message": "Invalid cntlid range [0-65519]" 00:17:14.721 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:14.721 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28914 -i 65520 00:17:14.982 [2024-11-20 06:27:35.007638] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28914: invalid cntlid range [65520-65519] 00:17:14.982 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:14.982 { 00:17:14.982 "nqn": "nqn.2016-06.io.spdk:cnode28914", 00:17:14.982 "min_cntlid": 65520, 00:17:14.982 "method": "nvmf_create_subsystem", 00:17:14.982 "req_id": 1 00:17:14.982 } 00:17:14.982 Got JSON-RPC error response 00:17:14.982 response: 00:17:14.982 { 00:17:14.982 "code": -32602, 00:17:14.982 "message": "Invalid cntlid range [65520-65519]" 00:17:14.982 }' 00:17:14.982 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:14.982 { 00:17:14.982 "nqn": "nqn.2016-06.io.spdk:cnode28914", 00:17:14.982 "min_cntlid": 65520, 00:17:14.982 "method": "nvmf_create_subsystem", 00:17:14.982 "req_id": 1 00:17:14.982 } 00:17:14.982 Got JSON-RPC error response 00:17:14.982 response: 00:17:14.982 { 00:17:14.982 "code": -32602, 00:17:14.982 "message": "Invalid cntlid range [65520-65519]" 00:17:14.982 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:14.982 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17312 -I 0 00:17:14.982 [2024-11-20 06:27:35.192214] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17312: invalid cntlid range [1-0] 00:17:14.982 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:14.982 { 00:17:14.982 "nqn": "nqn.2016-06.io.spdk:cnode17312", 00:17:14.982 "max_cntlid": 0, 00:17:14.982 "method": "nvmf_create_subsystem", 00:17:14.982 "req_id": 1 00:17:14.982 } 00:17:14.982 Got JSON-RPC error response 00:17:14.982 response: 00:17:14.982 { 00:17:14.982 "code": -32602, 00:17:14.982 "message": "Invalid cntlid range [1-0]" 00:17:14.982 }' 00:17:14.982 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:14.982 { 00:17:14.982 "nqn": "nqn.2016-06.io.spdk:cnode17312", 00:17:14.982 "max_cntlid": 0, 00:17:14.982 "method": "nvmf_create_subsystem", 00:17:14.982 "req_id": 1 00:17:14.982 } 00:17:14.982 Got JSON-RPC error response 00:17:14.982 response: 00:17:14.982 { 00:17:14.982 "code": -32602, 00:17:14.982 "message": "Invalid cntlid 
range [1-0]" 00:17:14.982 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:14.982 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15289 -I 65520 00:17:15.243 [2024-11-20 06:27:35.380780] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15289: invalid cntlid range [1-65520] 00:17:15.243 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:15.243 { 00:17:15.243 "nqn": "nqn.2016-06.io.spdk:cnode15289", 00:17:15.243 "max_cntlid": 65520, 00:17:15.243 "method": "nvmf_create_subsystem", 00:17:15.243 "req_id": 1 00:17:15.243 } 00:17:15.243 Got JSON-RPC error response 00:17:15.243 response: 00:17:15.243 { 00:17:15.243 "code": -32602, 00:17:15.243 "message": "Invalid cntlid range [1-65520]" 00:17:15.243 }' 00:17:15.243 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:15.243 { 00:17:15.243 "nqn": "nqn.2016-06.io.spdk:cnode15289", 00:17:15.243 "max_cntlid": 65520, 00:17:15.243 "method": "nvmf_create_subsystem", 00:17:15.243 "req_id": 1 00:17:15.243 } 00:17:15.243 Got JSON-RPC error response 00:17:15.243 response: 00:17:15.243 { 00:17:15.243 "code": -32602, 00:17:15.243 "message": "Invalid cntlid range [1-65520]" 00:17:15.243 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:15.243 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8345 -i 6 -I 5 00:17:15.504 [2024-11-20 06:27:35.569360] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8345: invalid cntlid range [6-5] 00:17:15.504 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:15.504 { 00:17:15.504 "nqn": "nqn.2016-06.io.spdk:cnode8345", 00:17:15.504 "min_cntlid": 6, 00:17:15.504 "max_cntlid": 5, 00:17:15.504 "method": "nvmf_create_subsystem", 00:17:15.504 "req_id": 1 00:17:15.504 } 00:17:15.505 Got JSON-RPC error response 00:17:15.505 response: 00:17:15.505 { 00:17:15.505 "code": -32602, 00:17:15.505 "message": "Invalid cntlid range [6-5]" 00:17:15.505 }' 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:15.505 { 00:17:15.505 "nqn": "nqn.2016-06.io.spdk:cnode8345", 00:17:15.505 "min_cntlid": 6, 00:17:15.505 "max_cntlid": 5, 00:17:15.505 "method": "nvmf_create_subsystem", 00:17:15.505 "req_id": 1 00:17:15.505 } 00:17:15.505 Got JSON-RPC error response 00:17:15.505 response: 00:17:15.505 { 00:17:15.505 "code": -32602, 00:17:15.505 "message": "Invalid cntlid range [6-5]" 00:17:15.505 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:15.505 { 00:17:15.505 "name": "foobar", 00:17:15.505 "method": "nvmf_delete_target", 00:17:15.505 "req_id": 1 00:17:15.505 } 00:17:15.505 Got JSON-RPC error response 00:17:15.505 response: 00:17:15.505 { 00:17:15.505 "code": -32602, 00:17:15.505 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:17:15.505 }' 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:15.505 { 00:17:15.505 "name": "foobar", 00:17:15.505 "method": "nvmf_delete_target", 00:17:15.505 "req_id": 1 00:17:15.505 } 00:17:15.505 Got JSON-RPC error response 00:17:15.505 response: 00:17:15.505 { 00:17:15.505 "code": -32602, 00:17:15.505 "message": "The specified target doesn't exist, cannot delete it." 00:17:15.505 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:15.505 rmmod nvme_tcp 00:17:15.505 rmmod nvme_fabrics 00:17:15.505 rmmod nvme_keyring 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2761433 ']' 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2761433 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 2761433 ']' 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 2761433 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:17:15.505 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2761433 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2761433' 00:17:15.767 killing process with pid 2761433 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 2761433 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 2761433 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:15.767 06:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.767 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.312 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:18.312 00:17:18.312 real 0m14.110s 00:17:18.312 user 0m21.059s 00:17:18.312 sys 0m6.753s 00:17:18.312 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.312 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:18.312 ************************************ 00:17:18.312 END TEST nvmf_invalid 00:17:18.312 ************************************ 00:17:18.312 06:27:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:18.312 06:27:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.313 ************************************ 00:17:18.313 START TEST nvmf_connect_stress 00:17:18.313 ************************************ 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:18.313 * Looking for test storage... 
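The nvmf_invalid pass that ends above drives the target's JSON-RPC validation with deliberately bad arguments: a model number containing disallowed characters, cntlid bounds outside the valid 1-65519 window (min of 0, max of 65520, or min greater than max), a listener removal on an empty address, and deletion of a target that was never created. Each probe is expected to fail with JSON-RPC error code -32602, and the script pattern-matches the error text captured in its out= variables. A minimal sketch of the same negative probes, assuming a running nvmf_tgt reachable on its default /var/tmp/spdk.sock socket (paths and flags mirror the calls captured above; every command is expected to fail):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # cntlid values must stay within [1, 65519] and min <= max; each call should return code -32602
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8511 -i 0           # min_cntlid below range
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15289 -I 65520      # max_cntlid above range
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8345 -i 6 -I 5      # inverted range [6-5]
  # deleting a target that does not exist is rejected the same way
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar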
00:17:18.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.313 --rc genhtml_branch_coverage=1 00:17:18.313 --rc genhtml_function_coverage=1 00:17:18.313 --rc genhtml_legend=1 00:17:18.313 --rc geninfo_all_blocks=1 00:17:18.313 --rc geninfo_unexecuted_blocks=1 00:17:18.313 00:17:18.313 ' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.313 --rc genhtml_branch_coverage=1 00:17:18.313 --rc genhtml_function_coverage=1 00:17:18.313 --rc genhtml_legend=1 00:17:18.313 --rc geninfo_all_blocks=1 00:17:18.313 --rc geninfo_unexecuted_blocks=1 00:17:18.313 00:17:18.313 ' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.313 --rc genhtml_branch_coverage=1 00:17:18.313 --rc genhtml_function_coverage=1 00:17:18.313 --rc genhtml_legend=1 00:17:18.313 --rc geninfo_all_blocks=1 00:17:18.313 --rc geninfo_unexecuted_blocks=1 00:17:18.313 00:17:18.313 ' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.313 --rc genhtml_branch_coverage=1 00:17:18.313 --rc genhtml_function_coverage=1 00:17:18.313 --rc genhtml_legend=1 00:17:18.313 --rc geninfo_all_blocks=1 00:17:18.313 --rc geninfo_unexecuted_blocks=1 00:17:18.313 00:17:18.313 ' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.313 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:18.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:18.314 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.474 06:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:26.474 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:26.474 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:26.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:26.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.474 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:17:26.475 00:17:26.475 --- 10.0.0.2 ping statistics --- 00:17:26.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.475 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:17:26.475 00:17:26.475 --- 10.0.0.1 ping statistics --- 00:17:26.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.475 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2766615 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2766615 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2766615 ']' 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:26.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:26.475 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.475 [2024-11-20 06:27:45.932699] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:17:26.475 [2024-11-20 06:27:45.932766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.475 [2024-11-20 06:27:46.031234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.475 [2024-11-20 06:27:46.082323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.475 [2024-11-20 06:27:46.082373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.475 [2024-11-20 06:27:46.082385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.475 [2024-11-20 06:27:46.082395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.475 [2024-11-20 06:27:46.082404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.475 [2024-11-20 06:27:46.084531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.475 [2024-11-20 06:27:46.084693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.475 [2024-11-20 06:27:46.084695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.737 [2024-11-20 06:27:46.805590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.737 [2024-11-20 06:27:46.831232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.737 NULL1 00:17:26.737 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2766798 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:26.738 06:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.738 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.309 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.309 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:27.309 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.310 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.310 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.571 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.571 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:27.571 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.571 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.571 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.832 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.832 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:27.832 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.832 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.832 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.091 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.091 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:28.091 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.091 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.091 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.352 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.352 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:28.352 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.352 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.352 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.922 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.922 06:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:28.922 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.922 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.922 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.183 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.183 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:29.183 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.183 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.183 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.444 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.444 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:29.444 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.444 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.444 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.704 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.704 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:29.704 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.704 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.704 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.965 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.965 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:29.965 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.965 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.965 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.536 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.536 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:30.536 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.536 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.536 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.795 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.795 06:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:30.795 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.795 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.795 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.055 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.055 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:31.055 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.055 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.055 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.315 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.315 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:31.315 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.315 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.315 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.575 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.575 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:31.575 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.575 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.575 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.145 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.145 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:32.145 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.145 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.145 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.405 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.405 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:32.406 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.406 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.406 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.666 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.666 06:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:32.666 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.666 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.666 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.927 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.927 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:32.927 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.927 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.927 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.188 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.188 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:33.188 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.188 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.188 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.757 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.758 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:33.758 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.758 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.758 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.017 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.017 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:34.017 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.017 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.017 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.277 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.277 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:34.277 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.277 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.277 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.537 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.537 06:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:34.537 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.537 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.537 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.106 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.106 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:35.106 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.106 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.106 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.365 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.365 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:35.365 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.365 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.365 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.624 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.624 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:35.624 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.624 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.624 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.884 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.884 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:35.884 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.884 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.884 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.145 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.145 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:36.145 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.145 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.145 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.716 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.716 06:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:36.716 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.716 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.716 06:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.976 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2766798 00:17:36.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2766798) - No such process 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2766798 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:36.976 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.977 rmmod nvme_tcp 00:17:36.977 rmmod nvme_fabrics 00:17:36.977 rmmod nvme_keyring 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2766615 ']' 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2766615 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2766615 ']' 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2766615 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2766615 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
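
Note on the long trace above: connect_stress.sh is polling its stress workload. Line 34's `kill -0 2766798` succeeds while the stressor is alive, line 35 fires an `rpc_cmd` at the target on every pass, and once `kill -0` reports "No such process" the script falls through to `wait` and cleanup. A minimal sketch of that loop, reconstructed from the trace; the variable names `perf_pid` and `testdir` and the rpc_cmd payload are assumptions, since the script body itself is not shown here:

    # hypothetical reconstruction of target/connect_stress.sh lines 34-39
    while kill -0 $perf_pid; do          # line 34: succeeds while the stressor runs
        rpc_cmd < $testdir/rpc.txt       # line 35: drive RPCs at the target (payload assumed)
    done
    wait $perf_pid                       # line 38: reap the stressor
    rm -f $testdir/rpc.txt               # line 39: drop the RPC batch file
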
00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2766615' 00:17:36.977 killing process with pid 2766615 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2766615 00:17:36.977 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2766615 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.237 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.159 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.159 00:17:39.159 real 0m21.254s 00:17:39.159 user 0m42.152s 00:17:39.159 sys 0m9.394s 00:17:39.159 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:39.159 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.159 ************************************ 00:17:39.159 END TEST nvmf_connect_stress 00:17:39.159 ************************************ 00:17:39.159 06:27:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:39.159 06:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:39.159 06:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:39.159 06:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.425 ************************************ 00:17:39.425 START TEST nvmf_fused_ordering 00:17:39.425 ************************************ 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:39.425 * Looking for test storage... 
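
The stretch of trace that follows is the coverage bootstrap in autotest_common.sh: it reads the installed lcov version, compares it against 2 via scripts/common.sh's cmp_versions helper, and picks the LCOV option string accordingly. Roughly, assuming only the helper names that appear in the trace:

    # sketch of the version gate traced below (lt/cmp_versions come from scripts/common.sh)
    lcov_ver=$(lcov --version | awk '{print $NF}')   # "1.15" on this runner
    if lt "$lcov_ver" 2; then                        # lt x y == cmp_versions x '<' y
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
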
00:17:39.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:39.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.425 --rc genhtml_branch_coverage=1 00:17:39.425 --rc genhtml_function_coverage=1 00:17:39.425 --rc genhtml_legend=1 00:17:39.425 --rc geninfo_all_blocks=1 00:17:39.425 --rc geninfo_unexecuted_blocks=1 00:17:39.425 00:17:39.425 ' 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:39.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.425 --rc genhtml_branch_coverage=1 00:17:39.425 --rc genhtml_function_coverage=1 00:17:39.425 --rc genhtml_legend=1 00:17:39.425 --rc geninfo_all_blocks=1 00:17:39.425 --rc geninfo_unexecuted_blocks=1 00:17:39.425 00:17:39.425 ' 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:39.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.425 --rc genhtml_branch_coverage=1 00:17:39.425 --rc genhtml_function_coverage=1 00:17:39.425 --rc genhtml_legend=1 00:17:39.425 --rc geninfo_all_blocks=1 00:17:39.425 --rc geninfo_unexecuted_blocks=1 00:17:39.425 00:17:39.425 ' 00:17:39.425 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:39.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.425 --rc genhtml_branch_coverage=1 00:17:39.425 --rc genhtml_function_coverage=1 00:17:39.425 --rc genhtml_legend=1 00:17:39.425 --rc geninfo_all_blocks=1 00:17:39.425 --rc geninfo_unexecuted_blocks=1 00:17:39.425 00:17:39.425 ' 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:39.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.426 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:47.571 06:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:47.571 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:47.571 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:47.571 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:47.571 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:47.571 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:47.572 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:47.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:17:47.572 00:17:47.572 --- 10.0.0.2 ping statistics --- 00:17:47.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.572 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:17:47.572 00:17:47.572 --- 10.0.0.1 ping statistics --- 00:17:47.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.572 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2773047 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2773047 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2773047 ']' 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:47.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:47.572 06:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.572 [2024-11-20 06:28:07.215523] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:17:47.572 [2024-11-20 06:28:07.215587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.572 [2024-11-20 06:28:07.314416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.572 [2024-11-20 06:28:07.364763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.572 [2024-11-20 06:28:07.364818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.572 [2024-11-20 06:28:07.364829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.572 [2024-11-20 06:28:07.364839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.572 [2024-11-20 06:28:07.364847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.572 [2024-11-20 06:28:07.365711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 [2024-11-20 06:28:08.077098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 [2024-11-20 06:28:08.101362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.834 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.096 NULL1 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.096 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.097 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.097 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:48.097 [2024-11-20 06:28:08.171436] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
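
Before the counter output below, a recap of the target-side setup that the rpc_cmd trace above performed: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, at most 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MB, 512-byte-block null bdev attached as namespace 1. The same sequence, condensed to its scripts/rpc.py equivalents; the harness actually drives these through its rpc_cmd wrapper, and the relative paths here are assumptions:

    # condensed equivalent of the rpc_cmd sequence traced above (paths assumed)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # the initiator-side test binary is then pointed at that listener:
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
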
00:17:48.097 [2024-11-20 06:28:08.171479] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773188 ]
00:17:48.358 Attached to nqn.2016-06.io.spdk:cnode1
00:17:48.358 Namespace ID: 1 size: 1GB
00:17:48.358 fused_ordering(0)
[ fused_ordering(1) through fused_ordering(1022) elided: 1,022 further single-line progress markers, identical apart from the index, emitted between 00:17:48.358 and 00:17:50.341 ]
00:17:50.341 fused_ordering(1023)
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:50.341 rmmod nvme_tcp
00:17:50.341 rmmod nvme_fabrics
00:17:50.341 rmmod nvme_keyring
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
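The teardown traced above follows the harness's standard fini path: clear the error trap, then nvmftestfini has nvmfcleanup sync and unload the kernel NVMe modules, retrying up to 20 times because module removal can race with connections that are still tearing down. A minimal hand-reconstructed sketch of that idiom (the real helpers live in test/nvmf/common.sh; the transport guard is simplified and the sleep between retries is an illustrative assumption):

    # Sketch of the cleanup idiom shown in the trace; reconstructed, not verbatim SPDK source.
    nvmfcleanup() {
        sync
        set +e                    # module removal may fail transiently; do not abort the run
        for i in {1..20}; do      # bounded retry instead of failing on the first EBUSY
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1               # assumption: give in-flight teardown a moment before retrying
        done
        set -e
        return 0
    }

    trap - SIGINT SIGTERM EXIT    # drop the error trap once the test body has passed
    nvmfcleanup

Bracketing the retry loop with set +e / set -e is the important part: under errexit a single busy-module failure from modprobe would otherwise abort the whole run instead of being retried.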
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2773047 ']' 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2773047 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2773047 ']' 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2773047 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2773047 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2773047' 00:17:50.341 killing process with pid 2773047 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2773047 00:17:50.341 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2773047 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.603 06:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.517 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:52.517 00:17:52.517 real 0m13.307s 00:17:52.517 user 0m6.931s 00:17:52.517 sys 0m7.109s 00:17:52.517 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:52.517 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:52.517 ************************************ 00:17:52.517 END TEST nvmf_fused_ordering 00:17:52.517 
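Before the next test starts, the harness reaps the target process it launched. The killprocess trace above shows the guards it applies: the pid string must be non-empty, kill -0 confirms the process still exists, and on Linux the command name is read back (reactor_1 here) so that a sudo wrapper is never signalled directly. A sketch of that pattern, reconstructed from the trace rather than copied from autotest_common.sh (the sudo branch in particular is an assumption):

    # Sketch of the killprocess pattern from the trace; reconstructed for illustration.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                       # no pid, nothing to do
        kill -0 "$pid" || return 0                      # already gone
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            # assumption: signal the wrapped child, never the sudo wrapper itself
            pid=$(ps --ppid "$pid" -o pid= | head -n1)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap; SIGTERM makes wait return non-zero
    }

wait both reaps the child and propagates its exit status; the || true in this sketch keeps the expected non-zero status of a SIGTERM'd process from tripping errexit.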
************************************ 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.778 ************************************ 00:17:52.778 START TEST nvmf_ns_masking 00:17:52.778 ************************************ 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:52.778 * Looking for test storage... 00:17:52.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:17:52.778 06:28:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:52.778 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:52.778 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:52.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.779 --rc genhtml_branch_coverage=1 00:17:52.779 --rc genhtml_function_coverage=1 00:17:52.779 --rc genhtml_legend=1 00:17:52.779 --rc geninfo_all_blocks=1 00:17:52.779 --rc geninfo_unexecuted_blocks=1 00:17:52.779 00:17:52.779 ' 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:52.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.779 --rc genhtml_branch_coverage=1 00:17:52.779 --rc genhtml_function_coverage=1 00:17:52.779 --rc genhtml_legend=1 00:17:52.779 --rc geninfo_all_blocks=1 00:17:52.779 --rc geninfo_unexecuted_blocks=1 00:17:52.779 00:17:52.779 ' 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:52.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.779 --rc genhtml_branch_coverage=1 00:17:52.779 --rc genhtml_function_coverage=1 00:17:52.779 --rc genhtml_legend=1 00:17:52.779 --rc geninfo_all_blocks=1 00:17:52.779 --rc geninfo_unexecuted_blocks=1 00:17:52.779 00:17:52.779 ' 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:52.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.779 --rc genhtml_branch_coverage=1 00:17:52.779 --rc genhtml_function_coverage=1 00:17:52.779 --rc genhtml_legend=1 00:17:52.779 --rc geninfo_all_blocks=1 00:17:52.779 --rc geninfo_unexecuted_blocks=1 00:17:52.779 00:17:52.779 ' 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.779 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0f0921ef-4175-4a3b-bb86-bd77be9d6472 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9a108d82-9444-4ced-843b-ea5b64a211b3 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=98267b24-9a26-46ee-8c30-f79c03dcbc85 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:53.041 06:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:01.186 06:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.186 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:01.187 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:01.187 06:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:01.187 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:01.187 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
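The discovery pass above finds two Intel E810 functions (0000:4b:00.0 and 0000:4b:00.1, device 0x159b, bound to the ice driver) and maps each one to its kernel interface through sysfs: every netdev registered for a PCI function appears under /sys/bus/pci/devices/<bdf>/net/. A condensed sketch of that lookup, using the same globbing the trace shows (the existence guard is added here for the case of a function with no bound netdev):

    # Resolve PCI functions to net devices via sysfs, as the trace above does.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev of this function
        [ -e "${pci_net_devs[0]}" ] || continue            # glob did not match: no netdev bound
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

On this host the two functions resolve to cvl_0_0 and cvl_0_1; as the trace that follows shows, cvl_0_0 becomes the target-side interface and is moved into the cvl_0_0_ns_spdk network namespace, while cvl_0_1 stays in the root namespace as the initiator side.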
00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:01.187 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.187 06:28:20 
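nvmf_tcp_init then turns the two ports of one physical NIC into a point-to-point test link: the target port moves into a dedicated network namespace and gets 10.0.0.2/24, while the initiator port stays in the root namespace as 10.0.0.1/24. The same sequence, condensed from the trace (root required):

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"           # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

Since NET_TYPE=phy, the two ports are presumably wired back to back, so the ping checks that follow exercise the real NIC path rather than loopback.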
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:01.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:18:01.187 00:18:01.187 --- 10.0.0.2 ping statistics --- 00:18:01.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.187 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:18:01.187 00:18:01.187 --- 10.0.0.1 ping statistics --- 00:18:01.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.187 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2777861 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2777861 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2777861 ']' 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:01.187 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.187 [2024-11-20 06:28:20.662819] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:18:01.187 [2024-11-20 06:28:20.662882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.187 [2024-11-20 06:28:20.762708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.188 [2024-11-20 06:28:20.813366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.188 [2024-11-20 06:28:20.813419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.188 [2024-11-20 06:28:20.813427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.188 [2024-11-20 06:28:20.813434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.188 [2024-11-20 06:28:20.813440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
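nvmfappstart launches the target binary inside the new namespace (via the NVMF_TARGET_NS_CMD prefix built above) and waitforlisten blocks until the JSON-RPC socket answers. A simplified stand-in for that start-and-wait pattern using this run's paths; the polling loop is an approximation of waitforlisten, not a copy of it:

    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)

    "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Poll until rpc.py can reach the UNIX-domain RPC socket.
    for ((i = 0; i < 100; i++)); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done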
00:18:01.188 [2024-11-20 06:28:20.814252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.449 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:01.449 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:01.449 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.449 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:01.449 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.449 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.449 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.449 [2024-11-20 06:28:21.698365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.710 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:01.710 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:01.710 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:01.710 Malloc1 00:18:01.710 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:01.970 Malloc2 00:18:01.970 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:02.231 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:02.492 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.492 [2024-11-20 06:28:22.700694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.492 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:02.492 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 98267b24-9a26-46ee-8c30-f79c03dcbc85 -a 10.0.0.2 -s 4420 -i 4 00:18:02.754 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:02.754 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:02.754 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.754 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:02.754 
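From here the test provisions everything over rpc.py and attaches from the initiator with nvme-cli. Condensed from the trace above (rpc.py talks to a UNIX socket, so it needs no netns prefix; nvme connect runs in the root namespace, where the initiator port lives):

    rpc=./scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator: explicit host NQN (-q), host UUID (-I), 4 I/O queues (-i)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
         -I 98267b24-9a26-46ee-8c30-f79c03dcbc85 -i 4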
06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:05.301 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.301 [ 0]:0x1 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2095b8cd75244aa929d4a049beb30b5 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2095b8cd75244aa929d4a049beb30b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.301 [ 0]:0x1 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2095b8cd75244aa929d4a049beb30b5 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2095b8cd75244aa929d4a049beb30b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.301 06:28:25 
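The ns_is_visible checks above reduce to two nvme-cli probes: the namespace should appear in the controller's active-namespace list, and Identify Namespace must return a real NGUID, since a masked namespace identifies with all zeroes. A close paraphrase of the helper as it appears in the trace, assuming the controller resolved to /dev/nvme0 as in this run:

    ns_is_visible() {
        local nsid=$1

        nvme list-ns /dev/nvme0 | grep "$nsid"    # prints e.g. "[ 0]:0x1" when active

        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # Visible only if the NGUID is non-zero.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1 && echo "nsid 1 visible to this host"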
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:05.301 [ 1]:0x2 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d012c8e546b468ca9a0e275ef2a15ac 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d012c8e546b468ca9a0e275ef2a15ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.301 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.561 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:05.821 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:05.821 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 98267b24-9a26-46ee-8c30-f79c03dcbc85 -a 10.0.0.2 -s 4420 -i 4 00:18:05.821 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:05.821 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:05.821 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.821 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:18:05.821 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:18:05.821 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:07.734 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:07.734 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:07.734 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:07.995 [ 0]:0x2 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=8d012c8e546b468ca9a0e275ef2a15ac 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d012c8e546b468ca9a0e275ef2a15ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.995 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.265 [ 0]:0x1 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2095b8cd75244aa929d4a049beb30b5 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2095b8cd75244aa929d4a049beb30b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.265 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:08.266 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.266 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.266 [ 1]:0x2 00:18:08.266 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:08.266 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.266 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d012c8e546b468ca9a0e275ef2a15ac 00:18:08.266 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d012c8e546b468ca9a0e275ef2a15ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.266 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.529 06:28:28 
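This is the heart of the test: namespace 1 was re-added with --no-auto-visible, so it stays hidden until nvmf_ns_add_host grants this host's NQN access, and disappears again after nvmf_ns_remove_host, all on a live connection with no reconnect. The toggle, condensed (reusing $rpc and ns_is_visible from the sketches above; NOT is the expected-failure wrapper shown in the trace):

    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    ns_is_visible 0x1        # passes: host1 has been granted access

    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    NOT ns_is_visible 0x1    # hidden again for host1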
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.529 [ 0]:0x2 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d012c8e546b468ca9a0e275ef2a15ac 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d012c8e546b468ca9a0e275ef2a15ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:08.529 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.790 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:08.790 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:08.790 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 98267b24-9a26-46ee-8c30-f79c03dcbc85 -a 10.0.0.2 -s 4420 -i 4 00:18:09.051 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:09.051 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:09.051 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.051 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:09.051 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:09.051 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:10.965 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:11.225 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:11.225 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:11.225 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:11.225 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.225 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.226 [ 0]:0x1 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d2095b8cd75244aa929d4a049beb30b5 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d2095b8cd75244aa929d4a049beb30b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.226 [ 1]:0x2 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.226 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d012c8e546b468ca9a0e275ef2a15ac 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d012c8e546b468ca9a0e275ef2a15ac != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.487 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.488 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.774 [ 0]:0x2 00:18:11.774 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.774 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.774 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d012c8e546b468ca9a0e275ef2a15ac 00:18:11.774 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d012c8e546b468ca9a0e275ef2a15ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.774 06:28:31 
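Every expected-failure check runs through autotest_common.sh's NOT wrapper, whose internals (local es=0, es=1, the (( es > 128 )) and (( !es == 0 )) tests) dominate the trace above. A simplified model of its behavior, not the original source: succeed only when the wrapped command fails, but let signal deaths (exit codes above 128) stay fatal:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"    # killed by a signal: treat as a real failure
        (( es != 0 ))                     # success only if the command failed
    }

    NOT ns_is_visible 0x1 && echo "nsid 1 correctly masked"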
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:11.774 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:11.775 [2024-11-20 06:28:31.982382] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:11.775 request: 00:18:11.775 { 00:18:11.775 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.775 "nsid": 2, 00:18:11.775 "host": "nqn.2016-06.io.spdk:host1", 00:18:11.775 "method": "nvmf_ns_remove_host", 00:18:11.775 "req_id": 1 00:18:11.775 } 00:18:11.775 Got JSON-RPC error response 00:18:11.775 response: 00:18:11.775 { 00:18:11.775 "code": -32602, 00:18:11.775 "message": "Invalid parameters" 00:18:11.775 } 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:11.775 06:28:31 
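On a rejected call, rpc.py echoes the JSON-RPC request and error object, which is exactly what the NOT-wrapped branch above asserts: the nvmf_ns_remove_host call against namespace 2 comes back with code -32602. The trace only shows the rejection; the reading that namespace 2 is refused because it was created auto-visible (and so has no per-host mask to edit) is our inference. A sketch of catching such a rejection from a script, reusing $rpc:

    # Expect failure: nsid 2 should not accept per-host masking changes.
    if ! out=$($rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 \
               nqn.2016-06.io.spdk:host1 2>&1); then
        echo "masking call rejected as expected:"
        grep -A2 '"code"' <<< "$out"    # "code": -32602, "message": "Invalid parameters"
    fi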
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.775 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:11.775 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.775 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:11.775 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.775 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.775 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.775 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:12.068 [ 0]:0x2 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d012c8e546b468ca9a0e275ef2a15ac 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d012c8e546b468ca9a0e275ef2a15ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2780352 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2780352 /var/tmp/host.sock 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2780352 ']' 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:12.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.068 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:12.068 [2024-11-20 06:28:32.226049] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:18:12.068 [2024-11-20 06:28:32.226101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780352 ] 00:18:12.068 [2024-11-20 06:28:32.314986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.368 [2024-11-20 06:28:32.353951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.939 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.939 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:12.939 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.939 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:13.201 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0f0921ef-4175-4a3b-bb86-bd77be9d6472 00:18:13.201 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:13.201 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0F0921EF41754A3BBB86BD77BE9D6472 -i 00:18:13.461 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9a108d82-9444-4ced-843b-ea5b64a211b3 00:18:13.461 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:13.461 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9A108D8294444CED843BEA5B64A211B3 -i 00:18:13.461 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:13.722 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:13.983 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:13.983 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:14.244 nvme0n1 00:18:14.244 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:14.244 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:14.504 nvme1n2 00:18:14.504 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:14.504 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:14.504 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:14.504 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:14.504 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:14.764 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:14.764 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:14.764 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:14.764 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:15.024 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0f0921ef-4175-4a3b-bb86-bd77be9d6472 == \0\f\0\9\2\1\e\f\-\4\1\7\5\-\4\a\3\b\-\b\b\8\6\-\b\d\7\7\b\e\9\d\6\4\7\2 ]] 00:18:15.024 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:15.024 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:15.024 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:15.024 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
9a108d82-9444-4ced-843b-ea5b64a211b3 == \9\a\1\0\8\d\8\2\-\9\4\4\4\-\4\c\e\d\-\8\4\3\b\-\e\a\5\b\6\4\a\2\1\1\b\3 ]] 00:18:15.024 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:15.285 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:15.546 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 0f0921ef-4175-4a3b-bb86-bd77be9d6472 00:18:15.546 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:15.546 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0F0921EF41754A3BBB86BD77BE9D6472 00:18:15.546 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:15.546 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0F0921EF41754A3BBB86BD77BE9D6472 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0F0921EF41754A3BBB86BD77BE9D6472 00:18:15.547 [2024-11-20 06:28:35.760403] bdev.c:8477:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:15.547 [2024-11-20 06:28:35.760434] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:15.547 [2024-11-20 06:28:35.760442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.547 request: 00:18:15.547 { 00:18:15.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.547 "namespace": { 00:18:15.547 "bdev_name": 
"invalid", 00:18:15.547 "nsid": 1, 00:18:15.547 "nguid": "0F0921EF41754A3BBB86BD77BE9D6472", 00:18:15.547 "no_auto_visible": false 00:18:15.547 }, 00:18:15.547 "method": "nvmf_subsystem_add_ns", 00:18:15.547 "req_id": 1 00:18:15.547 } 00:18:15.547 Got JSON-RPC error response 00:18:15.547 response: 00:18:15.547 { 00:18:15.547 "code": -32602, 00:18:15.547 "message": "Invalid parameters" 00:18:15.547 } 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 0f0921ef-4175-4a3b-bb86-bd77be9d6472 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:15.547 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0F0921EF41754A3BBB86BD77BE9D6472 -i 00:18:15.807 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:17.721 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:17.721 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:17.721 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2780352 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2780352 ']' 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2780352 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2780352 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2780352' 00:18:17.981 killing process with pid 2780352 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2780352 00:18:17.981 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2780352 00:18:18.241 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.501 rmmod nvme_tcp 00:18:18.501 rmmod nvme_fabrics 00:18:18.501 rmmod nvme_keyring 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2777861 ']' 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2777861 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2777861 ']' 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2777861 00:18:18.501 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:18.502 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:18.502 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2777861 00:18:18.502 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:18.502 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:18.502 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2777861' 00:18:18.502 killing process with pid 2777861 00:18:18.502 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2777861 00:18:18.502 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2777861 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.762 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.679 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:20.679 00:18:20.679 real 0m28.087s 00:18:20.679 user 0m31.907s 00:18:20.679 sys 0m8.220s 00:18:20.679 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:20.679 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:20.679 ************************************ 00:18:20.679 END TEST nvmf_ns_masking 00:18:20.679 ************************************ 00:18:20.941 06:28:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:20.941 06:28:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:20.941 06:28:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:20.941 06:28:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:20.941 06:28:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:20.941 ************************************ 00:18:20.941 START TEST nvmf_nvme_cli 00:18:20.941 ************************************ 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:20.941 * Looking for test storage... 
00:18:20.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:20.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.941 --rc genhtml_branch_coverage=1 00:18:20.941 --rc genhtml_function_coverage=1 00:18:20.941 --rc genhtml_legend=1 00:18:20.941 --rc geninfo_all_blocks=1 00:18:20.941 --rc geninfo_unexecuted_blocks=1 00:18:20.941 00:18:20.941 ' 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:20.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.941 --rc genhtml_branch_coverage=1 00:18:20.941 --rc genhtml_function_coverage=1 00:18:20.941 --rc genhtml_legend=1 00:18:20.941 --rc geninfo_all_blocks=1 00:18:20.941 --rc geninfo_unexecuted_blocks=1 00:18:20.941 00:18:20.941 ' 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:20.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.941 --rc genhtml_branch_coverage=1 00:18:20.941 --rc genhtml_function_coverage=1 00:18:20.941 --rc genhtml_legend=1 00:18:20.941 --rc geninfo_all_blocks=1 00:18:20.941 --rc geninfo_unexecuted_blocks=1 00:18:20.941 00:18:20.941 ' 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:20.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.941 --rc genhtml_branch_coverage=1 00:18:20.941 --rc genhtml_function_coverage=1 00:18:20.941 --rc genhtml_legend=1 00:18:20.941 --rc geninfo_all_blocks=1 00:18:20.941 --rc geninfo_unexecuted_blocks=1 00:18:20.941 00:18:20.941 ' 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.941 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:21.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.202 06:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:21.202 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:21.203 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:29.345 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:29.345 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.345 
06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:29.345 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:29.345 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:29.345 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:29.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:18:29.346 00:18:29.346 --- 10.0.0.2 ping statistics --- 00:18:29.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.346 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:18:29.346 00:18:29.346 --- 10.0.0.1 ping statistics --- 00:18:29.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.346 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2785782 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2785782 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2785782 ']' 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:29.346 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.346 [2024-11-20 06:28:48.767948] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:18:29.346 [2024-11-20 06:28:48.768014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.346 [2024-11-20 06:28:48.867289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:29.346 [2024-11-20 06:28:48.922968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.346 [2024-11-20 06:28:48.923024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.346 [2024-11-20 06:28:48.923033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.346 [2024-11-20 06:28:48.923040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.346 [2024-11-20 06:28:48.923047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.346 [2024-11-20 06:28:48.925147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.346 [2024-11-20 06:28:48.925311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.346 [2024-11-20 06:28:48.925538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.346 [2024-11-20 06:28:48.925540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.346 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:29.346 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:18:29.346 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.346 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:29.346 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 [2024-11-20 06:28:49.648126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 Malloc0 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 Malloc1 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 [2024-11-20 06:28:49.756238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.607 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:18:29.868 00:18:29.868 Discovery Log Number of Records 2, Generation counter 2 00:18:29.868 =====Discovery Log Entry 0====== 00:18:29.868 trtype: tcp 00:18:29.868 adrfam: ipv4 00:18:29.868 subtype: current discovery subsystem 00:18:29.868 treq: not required 00:18:29.868 portid: 0 00:18:29.868 trsvcid: 4420 00:18:29.868 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:29.868 traddr: 10.0.0.2 00:18:29.868 eflags: explicit discovery connections, duplicate discovery information 00:18:29.868 sectype: none 00:18:29.868 =====Discovery Log Entry 1====== 00:18:29.868 trtype: tcp 00:18:29.868 adrfam: ipv4 00:18:29.868 subtype: nvme subsystem 00:18:29.868 treq: not required 00:18:29.868 portid: 0 00:18:29.868 trsvcid: 4420 00:18:29.868 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:29.868 traddr: 10.0.0.2 00:18:29.868 eflags: none 00:18:29.868 sectype: none 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:29.868 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:31.252 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:31.252 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:18:31.252 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.252 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:31.252 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:31.252 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:33.796 06:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:33.796 /dev/nvme0n2 ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:33.796 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.057 06:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:34.057 rmmod nvme_tcp 00:18:34.057 rmmod nvme_fabrics 00:18:34.057 rmmod nvme_keyring 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2785782 ']' 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2785782 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2785782 ']' 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2785782 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2785782 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2785782' 00:18:34.057 killing process with pid 2785782 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2785782 00:18:34.057 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2785782 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.318 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.862 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:36.862 00:18:36.862 real 0m15.517s 00:18:36.862 user 0m24.234s 00:18:36.862 sys 0m6.380s 00:18:36.862 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:36.862 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:36.862 ************************************ 00:18:36.862 END TEST nvmf_nvme_cli 00:18:36.862 ************************************ 00:18:36.862 06:28:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:36.862 06:28:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:36.863 ************************************ 00:18:36.863 START TEST nvmf_vfio_user 00:18:36.863 ************************************ 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:36.863 * Looking for test storage... 00:18:36.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.863 --rc genhtml_branch_coverage=1 00:18:36.863 --rc genhtml_function_coverage=1 00:18:36.863 --rc genhtml_legend=1 00:18:36.863 --rc geninfo_all_blocks=1 00:18:36.863 --rc geninfo_unexecuted_blocks=1 00:18:36.863 00:18:36.863 ' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.863 --rc genhtml_branch_coverage=1 00:18:36.863 --rc genhtml_function_coverage=1 00:18:36.863 --rc genhtml_legend=1 00:18:36.863 --rc geninfo_all_blocks=1 00:18:36.863 --rc geninfo_unexecuted_blocks=1 00:18:36.863 00:18:36.863 ' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.863 --rc genhtml_branch_coverage=1 00:18:36.863 --rc genhtml_function_coverage=1 00:18:36.863 --rc genhtml_legend=1 00:18:36.863 --rc geninfo_all_blocks=1 00:18:36.863 --rc geninfo_unexecuted_blocks=1 00:18:36.863 00:18:36.863 ' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.863 --rc genhtml_branch_coverage=1 00:18:36.863 --rc genhtml_function_coverage=1 00:18:36.863 --rc genhtml_legend=1 00:18:36.863 --rc geninfo_all_blocks=1 00:18:36.863 --rc geninfo_unexecuted_blocks=1 00:18:36.863 00:18:36.863 ' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.863 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:36.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
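The nvmf/common.sh pass above pins the transport defaults (ports 4420/4421/4422, serial SPDKISFASTANDAWESOME) and derives the host identity from nvme-cli. A minimal sketch of that derivation, assuming nvme-cli is installed; the HOSTID extraction is an illustrative one-liner matching the traced values, not necessarily the exact expression common.sh uses:

#!/usr/bin/env bash
# Sketch: reproduce the host identity values seen in the trace.
# `nvme gen-hostnqn` (nvme-cli) prints an NQN of the form
# nqn.2014-08.org.nvmexpress:uuid:<uuid>; the hostid is the UUID part.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}   # strip everything through the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"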
00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2787594 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2787594' 00:18:36.864 Process pid: 2787594 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2787594 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2787594 ']' 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.864 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:36.864 [2024-11-20 06:28:56.910467] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:18:36.864 [2024-11-20 06:28:56.910539] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.864 [2024-11-20 06:28:56.997143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.864 [2024-11-20 06:28:57.031876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.864 [2024-11-20 06:28:57.031908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:36.864 [2024-11-20 06:28:57.031914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.864 [2024-11-20 06:28:57.031919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.864 [2024-11-20 06:28:57.031923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.864 [2024-11-20 06:28:57.033520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.864 [2024-11-20 06:28:57.033672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.864 [2024-11-20 06:28:57.033830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.864 [2024-11-20 06:28:57.033832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.804 06:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.804 06:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:18:37.804 06:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:38.745 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:38.745 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:38.745 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:38.745 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:38.745 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:38.745 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:39.005 Malloc1 00:18:39.005 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:39.267 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:39.267 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:39.528 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:39.528 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:39.528 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:39.788 Malloc2 00:18:39.789 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
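Condensed, the setup_nvmf_vfio_user phase traced here amounts to the RPC sequence below; this is a sketch, with the loop an editorial condensation of the two traced passes (the second device's add_ns/add_listener calls follow in the trace immediately after this point), and paths are the ones used in this workspace:

#!/usr/bin/env bash
# Sketch of the vfio-user target setup driven by setup_nvmf_vfio_user.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done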
00:18:39.789 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:40.049 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:40.314 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:40.314 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:40.314 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:40.314 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:40.314 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:40.315 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:40.315 [2024-11-20 06:29:00.410229] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:18:40.315 [2024-11-20 06:29:00.410268] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788288 ] 00:18:40.315 [2024-11-20 06:29:00.451432] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:40.315 [2024-11-20 06:29:00.459458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:40.315 [2024-11-20 06:29:00.459476] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f737e9cf000 00:18:40.315 [2024-11-20 06:29:00.460462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.461466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.462475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.463484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.464493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.465494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.466506] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.467504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.315 [2024-11-20 06:29:00.468515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:40.315 [2024-11-20 06:29:00.468522] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f737e9c4000 00:18:40.315 [2024-11-20 06:29:00.469435] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:40.315 [2024-11-20 06:29:00.483436] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:40.315 [2024-11-20 06:29:00.483455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:40.315 [2024-11-20 06:29:00.485626] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:40.315 [2024-11-20 06:29:00.485662] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:40.315 [2024-11-20 06:29:00.485726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:40.315 [2024-11-20 06:29:00.485740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:40.315 [2024-11-20 06:29:00.485744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:40.315 [2024-11-20 06:29:00.486626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:40.315 [2024-11-20 06:29:00.486633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:40.315 [2024-11-20 06:29:00.486639] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:40.315 [2024-11-20 06:29:00.487629] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:40.315 [2024-11-20 06:29:00.487636] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:40.315 [2024-11-20 06:29:00.487641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:40.315 [2024-11-20 06:29:00.488637] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:40.315 [2024-11-20 06:29:00.488643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:40.315 [2024-11-20 06:29:00.489647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:40.315 [2024-11-20 06:29:00.489653] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:40.315 [2024-11-20 06:29:00.489657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:40.315 [2024-11-20 06:29:00.489662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:40.315 [2024-11-20 06:29:00.489768] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:40.315 [2024-11-20 06:29:00.489772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:40.315 [2024-11-20 06:29:00.489776] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:40.315 [2024-11-20 06:29:00.490648] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:40.315 [2024-11-20 06:29:00.491656] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:40.315 [2024-11-20 06:29:00.492666] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:40.315 [2024-11-20 06:29:00.493665] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:40.315 [2024-11-20 06:29:00.493724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:40.315 [2024-11-20 06:29:00.494678] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:40.315 [2024-11-20 06:29:00.494683] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:40.315 [2024-11-20 06:29:00.494687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:40.315 [2024-11-20 06:29:00.494708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494721] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:40.315 [2024-11-20 06:29:00.494725] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.315 [2024-11-20 06:29:00.494728] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.315 [2024-11-20 06:29:00.494739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:40.315 [2024-11-20 06:29:00.494778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:40.315 [2024-11-20 06:29:00.494787] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:40.315 [2024-11-20 06:29:00.494791] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:40.315 [2024-11-20 06:29:00.494794] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:40.315 [2024-11-20 06:29:00.494799] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:40.315 [2024-11-20 06:29:00.494804] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:40.315 [2024-11-20 06:29:00.494807] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:40.315 [2024-11-20 06:29:00.494811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:40.315 [2024-11-20 06:29:00.494836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:40.315 [2024-11-20 06:29:00.494845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.315 [2024-11-20 06:29:00.494851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.315 [2024-11-20 06:29:00.494857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.315 [2024-11-20 06:29:00.494863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.315 [2024-11-20 06:29:00.494867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:40.315 [2024-11-20 06:29:00.494890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:40.315 [2024-11-20 06:29:00.494896] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:40.315 
[2024-11-20 06:29:00.494900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:40.315 [2024-11-20 06:29:00.494909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.494916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.494923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.494967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.494973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.494979] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:40.316 [2024-11-20 06:29:00.494984] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:40.316 [2024-11-20 06:29:00.494986] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.316 [2024-11-20 06:29:00.494991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495009] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:40.316 [2024-11-20 06:29:00.495017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495028] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:40.316 [2024-11-20 06:29:00.495031] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.316 [2024-11-20 06:29:00.495033] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.316 [2024-11-20 06:29:00.495038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495078] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:40.316 [2024-11-20 06:29:00.495081] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.316 [2024-11-20 06:29:00.495084] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.316 [2024-11-20 06:29:00.495088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495131] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:40.316 [2024-11-20 06:29:00.495136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:40.316 [2024-11-20 06:29:00.495139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:40.316 [2024-11-20 06:29:00.495153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:40.316 [2024-11-20 06:29:00.495233] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:40.316 [2024-11-20 06:29:00.495235] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:40.316 [2024-11-20 06:29:00.495238] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:40.316 [2024-11-20 06:29:00.495240] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:40.316 [2024-11-20 06:29:00.495245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:40.316 [2024-11-20 06:29:00.495250] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:40.316 [2024-11-20 06:29:00.495253] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:40.316 [2024-11-20 06:29:00.495256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.316 [2024-11-20 06:29:00.495260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:40.316 [2024-11-20 06:29:00.495268] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.316 [2024-11-20 06:29:00.495270] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.316 [2024-11-20 06:29:00.495275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495280] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:40.316 [2024-11-20 06:29:00.495283] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:40.316 [2024-11-20 06:29:00.495286] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.316 [2024-11-20 06:29:00.495290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:40.316 [2024-11-20 06:29:00.495297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:40.316 [2024-11-20 06:29:00.495318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:40.316 ===================================================== 00:18:40.316 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:40.316 ===================================================== 00:18:40.316 Controller Capabilities/Features 00:18:40.316 ================================ 00:18:40.316 Vendor ID: 4e58 00:18:40.316 Subsystem Vendor ID: 4e58 00:18:40.316 Serial Number: SPDK1 00:18:40.316 Model Number: SPDK bdev Controller 00:18:40.316 Firmware Version: 25.01 00:18:40.316 Recommended Arb Burst: 6 00:18:40.316 IEEE OUI Identifier: 8d 6b 50 00:18:40.316 Multi-path I/O 00:18:40.316 May have multiple subsystem ports: Yes 00:18:40.316 May have multiple controllers: Yes 00:18:40.316 Associated with SR-IOV VF: No 00:18:40.316 Max Data Transfer Size: 131072 00:18:40.316 Max Number of Namespaces: 32 00:18:40.316 Max Number of I/O Queues: 127 00:18:40.316 NVMe Specification Version (VS): 1.3 00:18:40.316 NVMe Specification Version (Identify): 1.3 00:18:40.316 Maximum Queue Entries: 256 00:18:40.316 Contiguous Queues Required: Yes 00:18:40.316 Arbitration Mechanisms Supported 00:18:40.316 Weighted Round Robin: Not Supported 00:18:40.316 Vendor Specific: Not Supported 00:18:40.316 Reset Timeout: 15000 ms 00:18:40.316 Doorbell Stride: 4 bytes 00:18:40.316 NVM Subsystem Reset: Not Supported 00:18:40.316 Command Sets Supported 00:18:40.316 NVM Command Set: Supported 00:18:40.316 Boot Partition: Not Supported 00:18:40.316 Memory Page Size Minimum: 4096 bytes 00:18:40.316 Memory Page Size Maximum: 4096 bytes 00:18:40.316 Persistent Memory Region: Not Supported 00:18:40.316 Optional Asynchronous Events Supported 00:18:40.316 Namespace Attribute Notices: Supported 00:18:40.316 Firmware Activation Notices: Not Supported 00:18:40.316 ANA Change Notices: Not Supported 00:18:40.316 PLE Aggregate Log Change Notices: Not Supported 00:18:40.316 LBA Status Info Alert Notices: Not Supported 00:18:40.316 EGE Aggregate Log Change Notices: Not Supported 00:18:40.316 Normal NVM Subsystem Shutdown event: Not Supported 00:18:40.316 Zone Descriptor Change Notices: Not Supported 00:18:40.317 Discovery Log Change Notices: Not Supported 00:18:40.317 Controller Attributes 00:18:40.317 128-bit Host Identifier: Supported 00:18:40.317 Non-Operational Permissive Mode: Not Supported 00:18:40.317 NVM Sets: Not Supported 00:18:40.317 Read Recovery Levels: Not Supported 00:18:40.317 Endurance Groups: Not Supported 00:18:40.317 Predictable Latency Mode: Not Supported 00:18:40.317 Traffic Based Keep ALive: Not Supported 00:18:40.317 Namespace Granularity: Not Supported 00:18:40.317 SQ Associations: Not Supported 00:18:40.317 UUID List: Not Supported 00:18:40.317 Multi-Domain Subsystem: Not Supported 00:18:40.317 Fixed Capacity Management: Not Supported 00:18:40.317 Variable Capacity Management: Not Supported 00:18:40.317 Delete Endurance Group: Not Supported 00:18:40.317 Delete NVM Set: Not Supported 00:18:40.317 Extended LBA Formats Supported: Not Supported 00:18:40.317 Flexible Data Placement Supported: Not Supported 00:18:40.317 00:18:40.317 Controller Memory Buffer Support 00:18:40.317 ================================ 00:18:40.317 
Supported: No 00:18:40.317 00:18:40.317 Persistent Memory Region Support 00:18:40.317 ================================ 00:18:40.317 Supported: No 00:18:40.317 00:18:40.317 Admin Command Set Attributes 00:18:40.317 ============================ 00:18:40.317 Security Send/Receive: Not Supported 00:18:40.317 Format NVM: Not Supported 00:18:40.317 Firmware Activate/Download: Not Supported 00:18:40.317 Namespace Management: Not Supported 00:18:40.317 Device Self-Test: Not Supported 00:18:40.317 Directives: Not Supported 00:18:40.317 NVMe-MI: Not Supported 00:18:40.317 Virtualization Management: Not Supported 00:18:40.317 Doorbell Buffer Config: Not Supported 00:18:40.317 Get LBA Status Capability: Not Supported 00:18:40.317 Command & Feature Lockdown Capability: Not Supported 00:18:40.317 Abort Command Limit: 4 00:18:40.317 Async Event Request Limit: 4 00:18:40.317 Number of Firmware Slots: N/A 00:18:40.317 Firmware Slot 1 Read-Only: N/A 00:18:40.317 Firmware Activation Without Reset: N/A 00:18:40.317 Multiple Update Detection Support: N/A 00:18:40.317 Firmware Update Granularity: No Information Provided 00:18:40.317 Per-Namespace SMART Log: No 00:18:40.317 Asymmetric Namespace Access Log Page: Not Supported 00:18:40.317 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:40.317 Command Effects Log Page: Supported 00:18:40.317 Get Log Page Extended Data: Supported 00:18:40.317 Telemetry Log Pages: Not Supported 00:18:40.317 Persistent Event Log Pages: Not Supported 00:18:40.317 Supported Log Pages Log Page: May Support 00:18:40.317 Commands Supported & Effects Log Page: Not Supported 00:18:40.317 Feature Identifiers & Effects Log Page:May Support 00:18:40.317 NVMe-MI Commands & Effects Log Page: May Support 00:18:40.317 Data Area 4 for Telemetry Log: Not Supported 00:18:40.317 Error Log Page Entries Supported: 128 00:18:40.317 Keep Alive: Supported 00:18:40.317 Keep Alive Granularity: 10000 ms 00:18:40.317 00:18:40.317 NVM Command Set Attributes 00:18:40.317 ========================== 00:18:40.317 Submission Queue Entry Size 00:18:40.317 Max: 64 00:18:40.317 Min: 64 00:18:40.317 Completion Queue Entry Size 00:18:40.317 Max: 16 00:18:40.317 Min: 16 00:18:40.317 Number of Namespaces: 32 00:18:40.317 Compare Command: Supported 00:18:40.317 Write Uncorrectable Command: Not Supported 00:18:40.317 Dataset Management Command: Supported 00:18:40.317 Write Zeroes Command: Supported 00:18:40.317 Set Features Save Field: Not Supported 00:18:40.317 Reservations: Not Supported 00:18:40.317 Timestamp: Not Supported 00:18:40.317 Copy: Supported 00:18:40.317 Volatile Write Cache: Present 00:18:40.317 Atomic Write Unit (Normal): 1 00:18:40.317 Atomic Write Unit (PFail): 1 00:18:40.317 Atomic Compare & Write Unit: 1 00:18:40.317 Fused Compare & Write: Supported 00:18:40.317 Scatter-Gather List 00:18:40.317 SGL Command Set: Supported (Dword aligned) 00:18:40.317 SGL Keyed: Not Supported 00:18:40.317 SGL Bit Bucket Descriptor: Not Supported 00:18:40.317 SGL Metadata Pointer: Not Supported 00:18:40.317 Oversized SGL: Not Supported 00:18:40.317 SGL Metadata Address: Not Supported 00:18:40.317 SGL Offset: Not Supported 00:18:40.317 Transport SGL Data Block: Not Supported 00:18:40.317 Replay Protected Memory Block: Not Supported 00:18:40.317 00:18:40.317 Firmware Slot Information 00:18:40.317 ========================= 00:18:40.317 Active slot: 1 00:18:40.317 Slot 1 Firmware Revision: 25.01 00:18:40.317 00:18:40.317 00:18:40.317 Commands Supported and Effects 00:18:40.317 ============================== 00:18:40.317 Admin 
Commands 00:18:40.317 -------------- 00:18:40.317 Get Log Page (02h): Supported 00:18:40.317 Identify (06h): Supported 00:18:40.317 Abort (08h): Supported 00:18:40.317 Set Features (09h): Supported 00:18:40.317 Get Features (0Ah): Supported 00:18:40.317 Asynchronous Event Request (0Ch): Supported 00:18:40.317 Keep Alive (18h): Supported 00:18:40.317 I/O Commands 00:18:40.317 ------------ 00:18:40.317 Flush (00h): Supported LBA-Change 00:18:40.317 Write (01h): Supported LBA-Change 00:18:40.317 Read (02h): Supported 00:18:40.317 Compare (05h): Supported 00:18:40.317 Write Zeroes (08h): Supported LBA-Change 00:18:40.317 Dataset Management (09h): Supported LBA-Change 00:18:40.317 Copy (19h): Supported LBA-Change 00:18:40.317 00:18:40.317 Error Log 00:18:40.317 ========= 00:18:40.317 00:18:40.317 Arbitration 00:18:40.317 =========== 00:18:40.317 Arbitration Burst: 1 00:18:40.317 00:18:40.317 Power Management 00:18:40.317 ================ 00:18:40.317 Number of Power States: 1 00:18:40.317 Current Power State: Power State #0 00:18:40.317 Power State #0: 00:18:40.317 Max Power: 0.00 W 00:18:40.317 Non-Operational State: Operational 00:18:40.317 Entry Latency: Not Reported 00:18:40.317 Exit Latency: Not Reported 00:18:40.317 Relative Read Throughput: 0 00:18:40.317 Relative Read Latency: 0 00:18:40.317 Relative Write Throughput: 0 00:18:40.317 Relative Write Latency: 0 00:18:40.317 Idle Power: Not Reported 00:18:40.317 Active Power: Not Reported 00:18:40.317 Non-Operational Permissive Mode: Not Supported 00:18:40.317 00:18:40.317 Health Information 00:18:40.317 ================== 00:18:40.317 Critical Warnings: 00:18:40.317 Available Spare Space: OK 00:18:40.317 Temperature: OK 00:18:40.317 Device Reliability: OK 00:18:40.317 Read Only: No 00:18:40.317 Volatile Memory Backup: OK 00:18:40.317 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:40.317 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:40.317 Available Spare: 0% 00:18:40.317 Available Spare Threshold: 0% [2024-11-20 06:29:00.495392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:40.317 [2024-11-20 06:29:00.495400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:40.317 [2024-11-20 06:29:00.495420] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:40.317 [2024-11-20 06:29:00.495428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.317 [2024-11-20 06:29:00.495432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.317 [2024-11-20 06:29:00.495437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.317 [2024-11-20 06:29:00.495441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.317 [2024-11-20 06:29:00.498164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:40.317 [2024-11-20 06:29:00.498173] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:40.317
[2024-11-20 06:29:00.498694] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:40.317 [2024-11-20 06:29:00.498744] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:40.317 [2024-11-20 06:29:00.498749] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:40.317 [2024-11-20 06:29:00.499697] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:40.317 [2024-11-20 06:29:00.499706] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:40.317 [2024-11-20 06:29:00.499761] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:40.317 [2024-11-20 06:29:00.500723] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:40.317 Life Percentage Used: 0% 00:18:40.317 Data Units Read: 0 00:18:40.317 Data Units Written: 0 00:18:40.317 Host Read Commands: 0 00:18:40.317 Host Write Commands: 0 00:18:40.317 Controller Busy Time: 0 minutes 00:18:40.318 Power Cycles: 0 00:18:40.318 Power On Hours: 0 hours 00:18:40.318 Unsafe Shutdowns: 0 00:18:40.318 Unrecoverable Media Errors: 0 00:18:40.318 Lifetime Error Log Entries: 0 00:18:40.318 Warning Temperature Time: 0 minutes 00:18:40.318 Critical Temperature Time: 0 minutes 00:18:40.318 00:18:40.318 Number of Queues 00:18:40.318 ================ 00:18:40.318 Number of I/O Submission Queues: 127 00:18:40.318 Number of I/O Completion Queues: 127 00:18:40.318 00:18:40.318 Active Namespaces 00:18:40.318 ================= 00:18:40.318 Namespace ID:1 00:18:40.318 Error Recovery Timeout: Unlimited 00:18:40.318 Command Set Identifier: NVM (00h) 00:18:40.318 Deallocate: Supported 00:18:40.318 Deallocated/Unwritten Error: Not Supported 00:18:40.318 Deallocated Read Value: Unknown 00:18:40.318 Deallocate in Write Zeroes: Not Supported 00:18:40.318 Deallocated Guard Field: 0xFFFF 00:18:40.318 Flush: Supported 00:18:40.318 Reservation: Supported 00:18:40.318 Namespace Sharing Capabilities: Multiple Controllers 00:18:40.318 Size (in LBAs): 131072 (0GiB) 00:18:40.318 Capacity (in LBAs): 131072 (0GiB) 00:18:40.318 Utilization (in LBAs): 131072 (0GiB) 00:18:40.318 NGUID: A25F654EC97A4F4C8DBA45A12BF5CACF 00:18:40.318 UUID: a25f654e-c97a-4f4c-8dba-45a12bf5cacf 00:18:40.318 Thin Provisioning: Not Supported 00:18:40.318 Per-NS Atomic Units: Yes 00:18:40.318 Atomic Boundary Size (Normal): 0 00:18:40.318 Atomic Boundary Size (PFail): 0 00:18:40.318 Atomic Boundary Offset: 0 00:18:40.318 Maximum Single Source Range Length: 65535 00:18:40.318 Maximum Copy Length: 65535 00:18:40.318 Maximum Source Range Count: 1 00:18:40.318 NGUID/EUI64 Never Reused: No 00:18:40.318 Namespace Write Protected: No 00:18:40.318 Number of LBA Formats: 1 00:18:40.318 Current LBA Format: LBA Format #00 00:18:40.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:40.318 00:18:40.318 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
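The read-throughput stage whose output follows is driven by the spdk_nvme_perf invocation just traced; reflowed here for readability, with the flags exactly as captured (queue depth 128, 4 KiB I/Os, a 5-second sequential-read run pinned to core mask 0x2, and -s/-g as the DPDK memory options):

#!/usr/bin/env bash
# The traced perf command, reflowed; the transport ID string selects the
# first vfio-user controller set up earlier in this run.
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
$perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2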
00:18:40.579 [2024-11-20 06:29:00.688824] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:45.875 Initializing NVMe Controllers 00:18:45.875 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:45.875 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:45.875 Initialization complete. Launching workers. 00:18:45.875 ======================================================== 00:18:45.875 Latency(us) 00:18:45.875 Device Information : IOPS MiB/s Average min max 00:18:45.875 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40099.00 156.64 3191.93 837.77 6960.52 00:18:45.875 ======================================================== 00:18:45.875 Total : 40099.00 156.64 3191.93 837.77 6960.52 00:18:45.875 00:18:45.875 [2024-11-20 06:29:05.708794] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:45.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:45.875 [2024-11-20 06:29:05.897637] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:51.164 Initializing NVMe Controllers 00:18:51.164 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:51.164 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:51.164 Initialization complete. Launching workers. 
00:18:51.164 ======================================================== 00:18:51.164 Latency(us) 00:18:51.164 Device Information : IOPS MiB/s Average min max 00:18:51.164 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7972.56 4988.35 9977.75 00:18:51.164 ======================================================== 00:18:51.164 Total : 16076.80 62.80 7972.56 4988.35 9977.75 00:18:51.164 00:18:51.164 [2024-11-20 06:29:10.934152] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:51.164 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:51.164 [2024-11-20 06:29:11.148040] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:56.453 [2024-11-20 06:29:16.236403] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:56.453 Initializing NVMe Controllers 00:18:56.453 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:56.453 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:56.453 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:56.453 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:56.453 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:56.453 Initialization complete. Launching workers. 00:18:56.453 Starting thread on core 2 00:18:56.453 Starting thread on core 3 00:18:56.453 Starting thread on core 1 00:18:56.453 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:56.453 [2024-11-20 06:29:16.485489] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:59.760 [2024-11-20 06:29:19.547149] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:59.760 Initializing NVMe Controllers 00:18:59.760 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.760 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:59.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:59.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:59.761 Initialization complete. Launching workers. 
00:18:59.761 Starting thread on core 1 with urgent priority queue 00:18:59.761 Starting thread on core 2 with urgent priority queue 00:18:59.761 Starting thread on core 3 with urgent priority queue 00:18:59.761 Starting thread on core 0 with urgent priority queue 00:18:59.761 SPDK bdev Controller (SPDK1 ) core 0: 17153.33 IO/s 5.83 secs/100000 ios 00:18:59.761 SPDK bdev Controller (SPDK1 ) core 1: 8063.67 IO/s 12.40 secs/100000 ios 00:18:59.761 SPDK bdev Controller (SPDK1 ) core 2: 15338.00 IO/s 6.52 secs/100000 ios 00:18:59.761 SPDK bdev Controller (SPDK1 ) core 3: 7601.67 IO/s 13.16 secs/100000 ios 00:18:59.761 ======================================================== 00:18:59.761 00:18:59.761 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:59.761 [2024-11-20 06:29:19.789592] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:59.761 Initializing NVMe Controllers 00:18:59.761 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.761 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.761 Namespace ID: 1 size: 0GB 00:18:59.761 Initialization complete. 00:18:59.761 INFO: using host memory buffer for IO 00:18:59.761 Hello world! 00:18:59.761 [2024-11-20 06:29:19.825811] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:59.761 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:00.022 [2024-11-20 06:29:20.060761] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:00.965 Initializing NVMe Controllers 00:19:00.965 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:00.965 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:00.965 Initialization complete. Launching workers. 
00:19:00.965 submit (in ns) avg, min, max = 6509.9, 2817.5, 4004640.0 00:19:00.965 complete (in ns) avg, min, max = 16868.6, 1631.7, 5991218.3 00:19:00.965 00:19:00.965 Submit histogram 00:19:00.965 ================ 00:19:00.965 Range in us Cumulative Count 00:19:00.965 2.813 - 2.827: 0.3796% ( 77) 00:19:00.965 2.827 - 2.840: 1.3656% ( 200) 00:19:00.965 2.840 - 2.853: 3.5200% ( 437) 00:19:00.965 2.853 - 2.867: 8.5141% ( 1013) 00:19:00.965 2.867 - 2.880: 13.5723% ( 1026) 00:19:00.965 2.880 - 2.893: 19.1580% ( 1133) 00:19:00.965 2.893 - 2.907: 25.2711% ( 1240) 00:19:00.965 2.907 - 2.920: 31.0688% ( 1176) 00:19:00.965 2.920 - 2.933: 36.7038% ( 1143) 00:19:00.965 2.933 - 2.947: 41.8211% ( 1038) 00:19:00.965 2.947 - 2.960: 46.9237% ( 1035) 00:19:00.965 2.960 - 2.973: 53.5742% ( 1349) 00:19:00.965 2.973 - 2.987: 62.8969% ( 1891) 00:19:00.965 2.987 - 3.000: 71.8793% ( 1822) 00:19:00.965 3.000 - 3.013: 79.9842% ( 1644) 00:19:00.965 3.013 - 3.027: 86.7925% ( 1381) 00:19:00.965 3.027 - 3.040: 91.7275% ( 1001) 00:19:00.965 3.040 - 3.053: 95.3954% ( 744) 00:19:00.965 3.053 - 3.067: 97.4709% ( 421) 00:19:00.965 3.067 - 3.080: 98.6590% ( 241) 00:19:00.965 3.080 - 3.093: 99.0682% ( 83) 00:19:00.965 3.093 - 3.107: 99.3739% ( 62) 00:19:00.965 3.107 - 3.120: 99.5070% ( 27) 00:19:00.965 3.120 - 3.133: 99.5711% ( 13) 00:19:00.965 3.133 - 3.147: 99.6056% ( 7) 00:19:00.965 3.147 - 3.160: 99.6105% ( 1) 00:19:00.965 3.160 - 3.173: 99.6155% ( 1) 00:19:00.965 3.173 - 3.187: 99.6204% ( 1) 00:19:00.965 3.467 - 3.493: 99.6253% ( 1) 00:19:00.965 3.627 - 3.653: 99.6303% ( 1) 00:19:00.965 3.707 - 3.733: 99.6352% ( 1) 00:19:00.965 4.133 - 4.160: 99.6401% ( 1) 00:19:00.965 4.293 - 4.320: 99.6450% ( 1) 00:19:00.965 4.373 - 4.400: 99.6500% ( 1) 00:19:00.965 4.560 - 4.587: 99.6549% ( 1) 00:19:00.965 4.613 - 4.640: 99.6598% ( 1) 00:19:00.965 4.640 - 4.667: 99.6648% ( 1) 00:19:00.965 4.667 - 4.693: 99.6697% ( 1) 00:19:00.965 4.773 - 4.800: 99.6746% ( 1) 00:19:00.965 4.853 - 4.880: 99.6796% ( 1) 00:19:00.965 4.880 - 4.907: 99.6894% ( 2) 00:19:00.965 4.960 - 4.987: 99.6943% ( 1) 00:19:00.965 5.013 - 5.040: 99.7042% ( 2) 00:19:00.965 5.040 - 5.067: 99.7141% ( 2) 00:19:00.965 5.067 - 5.093: 99.7239% ( 2) 00:19:00.965 5.093 - 5.120: 99.7289% ( 1) 00:19:00.965 5.147 - 5.173: 99.7338% ( 1) 00:19:00.965 5.200 - 5.227: 99.7387% ( 1) 00:19:00.965 5.253 - 5.280: 99.7486% ( 2) 00:19:00.965 5.280 - 5.307: 99.7535% ( 1) 00:19:00.965 5.333 - 5.360: 99.7584% ( 1) 00:19:00.965 5.387 - 5.413: 99.7634% ( 1) 00:19:00.965 5.493 - 5.520: 99.7683% ( 1) 00:19:00.965 5.547 - 5.573: 99.7732% ( 1) 00:19:00.965 5.627 - 5.653: 99.7782% ( 1) 00:19:00.965 5.760 - 5.787: 99.7880% ( 2) 00:19:00.965 5.787 - 5.813: 99.7929% ( 1) 00:19:00.965 5.893 - 5.920: 99.8028% ( 2) 00:19:00.965 5.973 - 6.000: 99.8077% ( 1) 00:19:00.965 6.027 - 6.053: 99.8127% ( 1) 00:19:00.965 6.080 - 6.107: 99.8176% ( 1) 00:19:00.965 6.107 - 6.133: 99.8225% ( 1) 00:19:00.965 6.133 - 6.160: 99.8324% ( 2) 00:19:00.965 6.160 - 6.187: 99.8373% ( 1) 00:19:00.965 6.213 - 6.240: 99.8472% ( 2) 00:19:00.965 6.240 - 6.267: 99.8570% ( 2) 00:19:00.965 6.267 - 6.293: 99.8620% ( 1) 00:19:00.965 6.347 - 6.373: 99.8669% ( 1) 00:19:00.965 6.373 - 6.400: 99.8768% ( 2) 00:19:00.965 6.400 - 6.427: 99.8866% ( 2) 00:19:00.965 6.453 - 6.480: 99.8915% ( 1) 00:19:00.965 6.720 - 6.747: 99.8965% ( 1) 00:19:00.965 6.987 - 7.040: 99.9014% ( 1) 00:19:00.965 7.840 - 7.893: 99.9063% ( 1) 00:19:00.965 11.893 - 11.947: 99.9113% ( 1) 00:19:00.965 3986.773 - 4014.080: 100.0000% ( 18) 00:19:00.965 00:19:00.965 Complete 
histogram 00:19:00.965 ================== 00:19:00.965 Range in us Cumulative Count 00:19:00.965 1.627 - 1.633: 0.0049% ( 1) 00:19:00.965 1.640 - [2024-11-20 06:29:21.079397] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:00.965 1.647: 0.3106% ( 62) 00:19:00.965 1.647 - 1.653: 0.7740% ( 94) 00:19:00.965 1.653 - 1.660: 0.8282% ( 11) 00:19:00.965 1.660 - 1.667: 0.8923% ( 13) 00:19:00.965 1.667 - 1.673: 0.9367% ( 9) 00:19:00.965 1.673 - 1.680: 0.9613% ( 5) 00:19:00.965 1.680 - 1.687: 0.9811% ( 4) 00:19:00.965 1.687 - 1.693: 0.9860% ( 1) 00:19:00.965 1.693 - 1.700: 1.0895% ( 21) 00:19:00.965 1.700 - 1.707: 17.0578% ( 3239) 00:19:00.965 1.707 - 1.720: 54.9645% ( 7689) 00:19:00.965 1.720 - 1.733: 73.8069% ( 3822) 00:19:00.965 1.733 - 1.747: 81.8872% ( 1639) 00:19:00.965 1.747 - 1.760: 83.3662% ( 300) 00:19:00.965 1.760 - 1.773: 87.1869% ( 775) 00:19:00.965 1.773 - 1.787: 92.9698% ( 1173) 00:19:00.965 1.787 - 1.800: 97.0420% ( 826) 00:19:00.965 1.800 - 1.813: 98.8119% ( 359) 00:19:00.965 1.813 - 1.827: 99.3098% ( 101) 00:19:00.965 1.827 - 1.840: 99.4380% ( 26) 00:19:00.965 1.840 - 1.853: 99.4429% ( 1) 00:19:00.965 1.853 - 1.867: 99.4478% ( 1) 00:19:00.965 3.293 - 3.307: 99.4528% ( 1) 00:19:00.965 3.813 - 3.840: 99.4626% ( 2) 00:19:00.965 3.867 - 3.893: 99.4676% ( 1) 00:19:00.965 3.920 - 3.947: 99.4725% ( 1) 00:19:00.965 3.973 - 4.000: 99.4774% ( 1) 00:19:00.965 4.000 - 4.027: 99.4824% ( 1) 00:19:00.966 4.080 - 4.107: 99.4873% ( 1) 00:19:00.966 4.133 - 4.160: 99.4971% ( 2) 00:19:00.966 4.213 - 4.240: 99.5021% ( 1) 00:19:00.966 4.267 - 4.293: 99.5218% ( 4) 00:19:00.966 4.320 - 4.347: 99.5267% ( 1) 00:19:00.966 4.507 - 4.533: 99.5317% ( 1) 00:19:00.966 4.560 - 4.587: 99.5415% ( 2) 00:19:00.966 4.613 - 4.640: 99.5464% ( 1) 00:19:00.966 4.640 - 4.667: 99.5514% ( 1) 00:19:00.966 4.693 - 4.720: 99.5563% ( 1) 00:19:00.966 4.720 - 4.747: 99.5612% ( 1) 00:19:00.966 4.773 - 4.800: 99.5662% ( 1) 00:19:00.966 4.800 - 4.827: 99.5711% ( 1) 00:19:00.966 4.853 - 4.880: 99.5760% ( 1) 00:19:00.966 4.907 - 4.933: 99.5810% ( 1) 00:19:00.966 4.960 - 4.987: 99.5908% ( 2) 00:19:00.966 5.173 - 5.200: 99.6007% ( 2) 00:19:00.966 5.253 - 5.280: 99.6056% ( 1) 00:19:00.966 5.387 - 5.413: 99.6105% ( 1) 00:19:00.966 5.600 - 5.627: 99.6155% ( 1) 00:19:00.966 11.307 - 11.360: 99.6204% ( 1) 00:19:00.966 2020.693 - 2034.347: 99.6253% ( 1) 00:19:00.966 2075.307 - 2088.960: 99.6303% ( 1) 00:19:00.966 3372.373 - 3386.027: 99.6352% ( 1) 00:19:00.966 3986.773 - 4014.080: 99.9901% ( 72) 00:19:00.966 5980.160 - 6007.467: 100.0000% ( 2) 00:19:00.966 00:19:00.966 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:00.966 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:00.966 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:00.966 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:00.966 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:01.227 [ 00:19:01.227 { 00:19:01.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:01.227 "subtype": "Discovery", 00:19:01.227 "listen_addresses": [], 
00:19:01.227 "allow_any_host": true, 00:19:01.227 "hosts": [] 00:19:01.227 }, 00:19:01.227 { 00:19:01.227 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:01.227 "subtype": "NVMe", 00:19:01.227 "listen_addresses": [ 00:19:01.227 { 00:19:01.227 "trtype": "VFIOUSER", 00:19:01.227 "adrfam": "IPv4", 00:19:01.227 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:01.227 "trsvcid": "0" 00:19:01.227 } 00:19:01.227 ], 00:19:01.227 "allow_any_host": true, 00:19:01.227 "hosts": [], 00:19:01.227 "serial_number": "SPDK1", 00:19:01.227 "model_number": "SPDK bdev Controller", 00:19:01.227 "max_namespaces": 32, 00:19:01.227 "min_cntlid": 1, 00:19:01.227 "max_cntlid": 65519, 00:19:01.227 "namespaces": [ 00:19:01.227 { 00:19:01.227 "nsid": 1, 00:19:01.227 "bdev_name": "Malloc1", 00:19:01.227 "name": "Malloc1", 00:19:01.227 "nguid": "A25F654EC97A4F4C8DBA45A12BF5CACF", 00:19:01.227 "uuid": "a25f654e-c97a-4f4c-8dba-45a12bf5cacf" 00:19:01.227 } 00:19:01.227 ] 00:19:01.227 }, 00:19:01.227 { 00:19:01.227 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:01.227 "subtype": "NVMe", 00:19:01.227 "listen_addresses": [ 00:19:01.227 { 00:19:01.227 "trtype": "VFIOUSER", 00:19:01.227 "adrfam": "IPv4", 00:19:01.227 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:01.227 "trsvcid": "0" 00:19:01.227 } 00:19:01.227 ], 00:19:01.227 "allow_any_host": true, 00:19:01.227 "hosts": [], 00:19:01.227 "serial_number": "SPDK2", 00:19:01.227 "model_number": "SPDK bdev Controller", 00:19:01.227 "max_namespaces": 32, 00:19:01.227 "min_cntlid": 1, 00:19:01.227 "max_cntlid": 65519, 00:19:01.227 "namespaces": [ 00:19:01.227 { 00:19:01.227 "nsid": 1, 00:19:01.227 "bdev_name": "Malloc2", 00:19:01.227 "name": "Malloc2", 00:19:01.227 "nguid": "268BAD6DDFAE45AC814C391A53B761F8", 00:19:01.227 "uuid": "268bad6d-dfae-45ac-814c-391a53b761f8" 00:19:01.227 } 00:19:01.227 ] 00:19:01.227 } 00:19:01.227 ] 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2792319 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:01.227 [2024-11-20 06:29:21.453521] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:01.227 Malloc3 00:19:01.227 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:01.488 [2024-11-20 06:29:21.649907] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:01.488 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:01.488 Asynchronous Event Request test 00:19:01.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:01.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:01.488 Registering asynchronous event callbacks... 00:19:01.488 Starting namespace attribute notice tests for all controllers... 00:19:01.488 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:01.488 aer_cb - Changed Namespace 00:19:01.488 Cleaning up... 00:19:01.751 [ 00:19:01.751 { 00:19:01.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:01.751 "subtype": "Discovery", 00:19:01.751 "listen_addresses": [], 00:19:01.751 "allow_any_host": true, 00:19:01.751 "hosts": [] 00:19:01.751 }, 00:19:01.751 { 00:19:01.751 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:01.751 "subtype": "NVMe", 00:19:01.751 "listen_addresses": [ 00:19:01.751 { 00:19:01.751 "trtype": "VFIOUSER", 00:19:01.751 "adrfam": "IPv4", 00:19:01.751 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:01.751 "trsvcid": "0" 00:19:01.751 } 00:19:01.751 ], 00:19:01.751 "allow_any_host": true, 00:19:01.751 "hosts": [], 00:19:01.751 "serial_number": "SPDK1", 00:19:01.751 "model_number": "SPDK bdev Controller", 00:19:01.751 "max_namespaces": 32, 00:19:01.751 "min_cntlid": 1, 00:19:01.751 "max_cntlid": 65519, 00:19:01.751 "namespaces": [ 00:19:01.751 { 00:19:01.751 "nsid": 1, 00:19:01.751 "bdev_name": "Malloc1", 00:19:01.751 "name": "Malloc1", 00:19:01.751 "nguid": "A25F654EC97A4F4C8DBA45A12BF5CACF", 00:19:01.751 "uuid": "a25f654e-c97a-4f4c-8dba-45a12bf5cacf" 00:19:01.751 }, 00:19:01.751 { 00:19:01.751 "nsid": 2, 00:19:01.751 "bdev_name": "Malloc3", 00:19:01.751 "name": "Malloc3", 00:19:01.751 "nguid": "F9CB00064BAE44F79E69D6EF22760C9F", 00:19:01.751 "uuid": "f9cb0006-4bae-44f7-9e69-d6ef22760c9f" 00:19:01.751 } 00:19:01.751 ] 00:19:01.751 }, 00:19:01.751 { 00:19:01.751 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:01.751 "subtype": "NVMe", 00:19:01.751 "listen_addresses": [ 00:19:01.751 { 00:19:01.751 "trtype": "VFIOUSER", 00:19:01.751 "adrfam": "IPv4", 00:19:01.751 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:01.751 "trsvcid": "0" 00:19:01.751 } 00:19:01.751 ], 00:19:01.751 "allow_any_host": true, 00:19:01.751 "hosts": [], 00:19:01.751 "serial_number": "SPDK2", 00:19:01.751 "model_number": "SPDK bdev 
Controller", 00:19:01.751 "max_namespaces": 32, 00:19:01.751 "min_cntlid": 1, 00:19:01.751 "max_cntlid": 65519, 00:19:01.751 "namespaces": [ 00:19:01.751 { 00:19:01.751 "nsid": 1, 00:19:01.751 "bdev_name": "Malloc2", 00:19:01.751 "name": "Malloc2", 00:19:01.751 "nguid": "268BAD6DDFAE45AC814C391A53B761F8", 00:19:01.751 "uuid": "268bad6d-dfae-45ac-814c-391a53b761f8" 00:19:01.751 } 00:19:01.751 ] 00:19:01.751 } 00:19:01.751 ] 00:19:01.751 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2792319 00:19:01.751 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:01.751 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:01.751 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:01.751 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:01.752 [2024-11-20 06:29:21.868318] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:19:01.752 [2024-11-20 06:29:21.868360] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792325 ] 00:19:01.752 [2024-11-20 06:29:21.908174] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:01.752 [2024-11-20 06:29:21.916323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:01.752 [2024-11-20 06:29:21.916341] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f41e39c6000 00:19:01.752 [2024-11-20 06:29:21.917322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.752 [2024-11-20 06:29:21.918331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.752 [2024-11-20 06:29:21.919335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.752 [2024-11-20 06:29:21.920338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:01.752 [2024-11-20 06:29:21.921344] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:01.752 [2024-11-20 06:29:21.922349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.752 [2024-11-20 06:29:21.923355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:01.752 [2024-11-20 06:29:21.924359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:19:01.752 [2024-11-20 06:29:21.925366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:01.752 [2024-11-20 06:29:21.925374] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f41e39bb000 00:19:01.752 [2024-11-20 06:29:21.926287] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:01.752 [2024-11-20 06:29:21.940434] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:01.752 [2024-11-20 06:29:21.940453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:01.752 [2024-11-20 06:29:21.942507] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:01.752 [2024-11-20 06:29:21.942544] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:01.752 [2024-11-20 06:29:21.942604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:01.752 [2024-11-20 06:29:21.942614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:01.752 [2024-11-20 06:29:21.942618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:01.752 [2024-11-20 06:29:21.943515] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:01.752 [2024-11-20 06:29:21.943522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:01.752 [2024-11-20 06:29:21.943527] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:01.752 [2024-11-20 06:29:21.944519] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:01.752 [2024-11-20 06:29:21.944526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:01.752 [2024-11-20 06:29:21.944531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:01.752 [2024-11-20 06:29:21.945528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:01.752 [2024-11-20 06:29:21.945534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:01.752 [2024-11-20 06:29:21.946530] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:01.752 [2024-11-20 06:29:21.946536] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:19:01.752 [2024-11-20 06:29:21.946540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:01.752 [2024-11-20 06:29:21.946545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:01.752 [2024-11-20 06:29:21.946651] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:01.752 [2024-11-20 06:29:21.946654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:01.752 [2024-11-20 06:29:21.946658] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:01.752 [2024-11-20 06:29:21.947537] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:01.752 [2024-11-20 06:29:21.948542] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:01.752 [2024-11-20 06:29:21.949546] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:01.752 [2024-11-20 06:29:21.950548] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:01.752 [2024-11-20 06:29:21.950577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:01.752 [2024-11-20 06:29:21.951556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:01.752 [2024-11-20 06:29:21.951563] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:01.752 [2024-11-20 06:29:21.951566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.951581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:01.752 [2024-11-20 06:29:21.951586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.951595] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:01.752 [2024-11-20 06:29:21.951598] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:01.752 [2024-11-20 06:29:21.951601] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.752 [2024-11-20 06:29:21.951611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:01.752 [2024-11-20 06:29:21.958165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:01.752 
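(An aside for readers decoding these register traces by hand: the set_reg_4/get_reg_4 values above map directly onto NVMe-spec controller register fields — CC sits at offset 0x14, CSTS at 0x1C. A quick sketch plugging in the CC write of 0x460001 and the CSTS read of 0x1 logged just above; the field layout comes from the NVMe base specification, not from this log:)

    cc=0x460001                                  # value written to CC (offset 0x14) above
    echo "CC.EN     = $(( cc & 1 ))"             # 1 -> enable requested
    echo "CC.IOSQES = $(( (cc >> 16) & 0xF ))"   # 6 -> 2^6 = 64-byte SQ entries
    echo "CC.IOCQES = $(( (cc >> 20) & 0xF ))"   # 4 -> 2^4 = 16-byte CQ entries
    csts=0x1                                     # value read from CSTS (offset 0x1C) above
    echo "CSTS.RDY  = $(( csts & 1 ))"           # 1 -> controller reports ready

(The IOSQES/IOCQES values line up with the "Submission Queue Entry Size Max: 64" and "Completion Queue Entry Size Max: 16" rows in the identify dump further down.)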
[2024-11-20 06:29:21.958174] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:01.752 [2024-11-20 06:29:21.958177] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:01.752 [2024-11-20 06:29:21.958180] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:01.752 [2024-11-20 06:29:21.958184] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:01.752 [2024-11-20 06:29:21.958189] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:01.752 [2024-11-20 06:29:21.958192] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:01.752 [2024-11-20 06:29:21.958196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.958202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.958210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:01.752 [2024-11-20 06:29:21.966162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:01.752 [2024-11-20 06:29:21.966172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.752 [2024-11-20 06:29:21.966178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.752 [2024-11-20 06:29:21.966186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.752 [2024-11-20 06:29:21.966192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.752 [2024-11-20 06:29:21.966195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.966200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.966206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:01.752 [2024-11-20 06:29:21.974162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:01.752 [2024-11-20 06:29:21.974169] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:01.752 [2024-11-20 06:29:21.974173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:19:01.752 [2024-11-20 06:29:21.974178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.974182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:01.752 [2024-11-20 06:29:21.974188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:01.752 [2024-11-20 06:29:21.982163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:01.753 [2024-11-20 06:29:21.982210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:21.982216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:21.982221] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:01.753 [2024-11-20 06:29:21.982224] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:01.753 [2024-11-20 06:29:21.982227] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.753 [2024-11-20 06:29:21.982231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:01.753 [2024-11-20 06:29:21.990162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:01.753 [2024-11-20 06:29:21.990170] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:01.753 [2024-11-20 06:29:21.990179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:21.990185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:21.990189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:01.753 [2024-11-20 06:29:21.990192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:01.753 [2024-11-20 06:29:21.990195] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.753 [2024-11-20 06:29:21.990199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:01.753 [2024-11-20 06:29:21.998162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:01.753 [2024-11-20 06:29:21.998173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:21.998179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:21.998184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:01.753 [2024-11-20 06:29:21.998187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:01.753 [2024-11-20 06:29:21.998190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.753 [2024-11-20 06:29:21.998194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:01.753 [2024-11-20 06:29:22.006162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:01.753 [2024-11-20 06:29:22.006169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:22.006173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:22.006180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:22.006184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:22.006188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:22.006192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:22.006196] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:01.753 [2024-11-20 06:29:22.006199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:01.753 [2024-11-20 06:29:22.006203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:01.753 [2024-11-20 06:29:22.006216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:01.753 [2024-11-20 06:29:22.014162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:01.753 [2024-11-20 06:29:22.014172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:01.753 [2024-11-20 06:29:22.022163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:01.753 [2024-11-20 06:29:22.022173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:02.021 [2024-11-20 06:29:22.030163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
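(Similarly, the cdw0:7e007e completion for SET FEATURES NUMBER OF QUEUES above packs the granted queue counts into the two 16-bit halves of cdw0, zero-based per the spec. Decoding it accounts for the "Number of I/O Submission/Completion Queues: 127" rows reported elsewhere in this stage; a sketch:)

    cdw0=0x7e007e
    echo "I/O SQs granted: $(( (cdw0 & 0xFFFF) + 1 ))"          # 0x7e + 1 = 127
    echo "I/O CQs granted: $(( ((cdw0 >> 16) & 0xFFFF) + 1 ))"  # 0x7e + 1 = 127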
00:19:02.021 [2024-11-20 06:29:22.030173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:02.021 [2024-11-20 06:29:22.038163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:02.021 [2024-11-20 06:29:22.038177] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:02.021 [2024-11-20 06:29:22.038180] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:02.021 [2024-11-20 06:29:22.038183] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:02.021 [2024-11-20 06:29:22.038185] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:02.021 [2024-11-20 06:29:22.038188] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:02.021 [2024-11-20 06:29:22.038192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:02.021 [2024-11-20 06:29:22.038198] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:02.021 [2024-11-20 06:29:22.038201] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:02.021 [2024-11-20 06:29:22.038203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:02.021 [2024-11-20 06:29:22.038207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:02.021 [2024-11-20 06:29:22.038213] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:02.021 [2024-11-20 06:29:22.038216] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:02.021 [2024-11-20 06:29:22.038218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:02.021 [2024-11-20 06:29:22.038222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:02.021 [2024-11-20 06:29:22.038228] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:02.021 [2024-11-20 06:29:22.038231] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:02.021 [2024-11-20 06:29:22.038233] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:02.021 [2024-11-20 06:29:22.038237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:02.021 [2024-11-20 06:29:22.046164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:02.021 [2024-11-20 06:29:22.046174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:02.021 [2024-11-20 06:29:22.046182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:02.021 
[2024-11-20 06:29:22.046186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:02.021 ===================================================== 00:19:02.021 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:02.021 ===================================================== 00:19:02.021 Controller Capabilities/Features 00:19:02.021 ================================ 00:19:02.021 Vendor ID: 4e58 00:19:02.021 Subsystem Vendor ID: 4e58 00:19:02.021 Serial Number: SPDK2 00:19:02.021 Model Number: SPDK bdev Controller 00:19:02.021 Firmware Version: 25.01 00:19:02.021 Recommended Arb Burst: 6 00:19:02.021 IEEE OUI Identifier: 8d 6b 50 00:19:02.021 Multi-path I/O 00:19:02.021 May have multiple subsystem ports: Yes 00:19:02.021 May have multiple controllers: Yes 00:19:02.021 Associated with SR-IOV VF: No 00:19:02.021 Max Data Transfer Size: 131072 00:19:02.021 Max Number of Namespaces: 32 00:19:02.021 Max Number of I/O Queues: 127 00:19:02.021 NVMe Specification Version (VS): 1.3 00:19:02.021 NVMe Specification Version (Identify): 1.3 00:19:02.021 Maximum Queue Entries: 256 00:19:02.021 Contiguous Queues Required: Yes 00:19:02.021 Arbitration Mechanisms Supported 00:19:02.021 Weighted Round Robin: Not Supported 00:19:02.021 Vendor Specific: Not Supported 00:19:02.021 Reset Timeout: 15000 ms 00:19:02.021 Doorbell Stride: 4 bytes 00:19:02.021 NVM Subsystem Reset: Not Supported 00:19:02.021 Command Sets Supported 00:19:02.021 NVM Command Set: Supported 00:19:02.021 Boot Partition: Not Supported 00:19:02.021 Memory Page Size Minimum: 4096 bytes 00:19:02.021 Memory Page Size Maximum: 4096 bytes 00:19:02.021 Persistent Memory Region: Not Supported 00:19:02.021 Optional Asynchronous Events Supported 00:19:02.021 Namespace Attribute Notices: Supported 00:19:02.021 Firmware Activation Notices: Not Supported 00:19:02.021 ANA Change Notices: Not Supported 00:19:02.021 PLE Aggregate Log Change Notices: Not Supported 00:19:02.021 LBA Status Info Alert Notices: Not Supported 00:19:02.021 EGE Aggregate Log Change Notices: Not Supported 00:19:02.021 Normal NVM Subsystem Shutdown event: Not Supported 00:19:02.021 Zone Descriptor Change Notices: Not Supported 00:19:02.021 Discovery Log Change Notices: Not Supported 00:19:02.021 Controller Attributes 00:19:02.021 128-bit Host Identifier: Supported 00:19:02.021 Non-Operational Permissive Mode: Not Supported 00:19:02.021 NVM Sets: Not Supported 00:19:02.021 Read Recovery Levels: Not Supported 00:19:02.021 Endurance Groups: Not Supported 00:19:02.021 Predictable Latency Mode: Not Supported 00:19:02.021 Traffic Based Keep ALive: Not Supported 00:19:02.021 Namespace Granularity: Not Supported 00:19:02.021 SQ Associations: Not Supported 00:19:02.021 UUID List: Not Supported 00:19:02.021 Multi-Domain Subsystem: Not Supported 00:19:02.021 Fixed Capacity Management: Not Supported 00:19:02.021 Variable Capacity Management: Not Supported 00:19:02.021 Delete Endurance Group: Not Supported 00:19:02.021 Delete NVM Set: Not Supported 00:19:02.021 Extended LBA Formats Supported: Not Supported 00:19:02.021 Flexible Data Placement Supported: Not Supported 00:19:02.021 00:19:02.021 Controller Memory Buffer Support 00:19:02.021 ================================ 00:19:02.021 Supported: No 00:19:02.021 00:19:02.021 Persistent Memory Region Support 00:19:02.021 ================================ 00:19:02.021 Supported: No 00:19:02.021 00:19:02.021 Admin Command Set Attributes 
00:19:02.021 ============================ 00:19:02.021 Security Send/Receive: Not Supported 00:19:02.021 Format NVM: Not Supported 00:19:02.021 Firmware Activate/Download: Not Supported 00:19:02.021 Namespace Management: Not Supported 00:19:02.021 Device Self-Test: Not Supported 00:19:02.021 Directives: Not Supported 00:19:02.021 NVMe-MI: Not Supported 00:19:02.021 Virtualization Management: Not Supported 00:19:02.021 Doorbell Buffer Config: Not Supported 00:19:02.021 Get LBA Status Capability: Not Supported 00:19:02.021 Command & Feature Lockdown Capability: Not Supported 00:19:02.021 Abort Command Limit: 4 00:19:02.021 Async Event Request Limit: 4 00:19:02.021 Number of Firmware Slots: N/A 00:19:02.021 Firmware Slot 1 Read-Only: N/A 00:19:02.021 Firmware Activation Without Reset: N/A 00:19:02.021 Multiple Update Detection Support: N/A 00:19:02.021 Firmware Update Granularity: No Information Provided 00:19:02.021 Per-Namespace SMART Log: No 00:19:02.021 Asymmetric Namespace Access Log Page: Not Supported 00:19:02.021 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:02.021 Command Effects Log Page: Supported 00:19:02.021 Get Log Page Extended Data: Supported 00:19:02.021 Telemetry Log Pages: Not Supported 00:19:02.021 Persistent Event Log Pages: Not Supported 00:19:02.021 Supported Log Pages Log Page: May Support 00:19:02.021 Commands Supported & Effects Log Page: Not Supported 00:19:02.021 Feature Identifiers & Effects Log Page:May Support 00:19:02.021 NVMe-MI Commands & Effects Log Page: May Support 00:19:02.021 Data Area 4 for Telemetry Log: Not Supported 00:19:02.021 Error Log Page Entries Supported: 128 00:19:02.021 Keep Alive: Supported 00:19:02.021 Keep Alive Granularity: 10000 ms 00:19:02.021 00:19:02.021 NVM Command Set Attributes 00:19:02.021 ========================== 00:19:02.021 Submission Queue Entry Size 00:19:02.021 Max: 64 00:19:02.021 Min: 64 00:19:02.021 Completion Queue Entry Size 00:19:02.021 Max: 16 00:19:02.021 Min: 16 00:19:02.021 Number of Namespaces: 32 00:19:02.021 Compare Command: Supported 00:19:02.021 Write Uncorrectable Command: Not Supported 00:19:02.021 Dataset Management Command: Supported 00:19:02.021 Write Zeroes Command: Supported 00:19:02.021 Set Features Save Field: Not Supported 00:19:02.021 Reservations: Not Supported 00:19:02.021 Timestamp: Not Supported 00:19:02.021 Copy: Supported 00:19:02.021 Volatile Write Cache: Present 00:19:02.021 Atomic Write Unit (Normal): 1 00:19:02.021 Atomic Write Unit (PFail): 1 00:19:02.021 Atomic Compare & Write Unit: 1 00:19:02.022 Fused Compare & Write: Supported 00:19:02.022 Scatter-Gather List 00:19:02.022 SGL Command Set: Supported (Dword aligned) 00:19:02.022 SGL Keyed: Not Supported 00:19:02.022 SGL Bit Bucket Descriptor: Not Supported 00:19:02.022 SGL Metadata Pointer: Not Supported 00:19:02.022 Oversized SGL: Not Supported 00:19:02.022 SGL Metadata Address: Not Supported 00:19:02.022 SGL Offset: Not Supported 00:19:02.022 Transport SGL Data Block: Not Supported 00:19:02.022 Replay Protected Memory Block: Not Supported 00:19:02.022 00:19:02.022 Firmware Slot Information 00:19:02.022 ========================= 00:19:02.022 Active slot: 1 00:19:02.022 Slot 1 Firmware Revision: 25.01 00:19:02.022 00:19:02.022 00:19:02.022 Commands Supported and Effects 00:19:02.022 ============================== 00:19:02.022 Admin Commands 00:19:02.022 -------------- 00:19:02.022 Get Log Page (02h): Supported 00:19:02.022 Identify (06h): Supported 00:19:02.022 Abort (08h): Supported 00:19:02.022 Set Features (09h): Supported 
00:19:02.022 Get Features (0Ah): Supported 00:19:02.022 Asynchronous Event Request (0Ch): Supported 00:19:02.022 Keep Alive (18h): Supported 00:19:02.022 I/O Commands 00:19:02.022 ------------ 00:19:02.022 Flush (00h): Supported LBA-Change 00:19:02.022 Write (01h): Supported LBA-Change 00:19:02.022 Read (02h): Supported 00:19:02.022 Compare (05h): Supported 00:19:02.022 Write Zeroes (08h): Supported LBA-Change 00:19:02.022 Dataset Management (09h): Supported LBA-Change 00:19:02.022 Copy (19h): Supported LBA-Change 00:19:02.022 00:19:02.022 Error Log 00:19:02.022 ========= 00:19:02.022 00:19:02.022 Arbitration 00:19:02.022 =========== 00:19:02.022 Arbitration Burst: 1 00:19:02.022 00:19:02.022 Power Management 00:19:02.022 ================ 00:19:02.022 Number of Power States: 1 00:19:02.022 Current Power State: Power State #0 00:19:02.022 Power State #0: 00:19:02.022 Max Power: 0.00 W 00:19:02.022 Non-Operational State: Operational 00:19:02.022 Entry Latency: Not Reported 00:19:02.022 Exit Latency: Not Reported 00:19:02.022 Relative Read Throughput: 0 00:19:02.022 Relative Read Latency: 0 00:19:02.022 Relative Write Throughput: 0 00:19:02.022 Relative Write Latency: 0 00:19:02.022 Idle Power: Not Reported 00:19:02.022 Active Power: Not Reported 00:19:02.022 Non-Operational Permissive Mode: Not Supported 00:19:02.022 00:19:02.022 Health Information 00:19:02.022 ================== 00:19:02.022 Critical Warnings: 00:19:02.022 Available Spare Space: OK 00:19:02.022 Temperature: OK 00:19:02.022 Device Reliability: OK 00:19:02.022 Read Only: No 00:19:02.022 Volatile Memory Backup: OK 00:19:02.022 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:02.022 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:02.022 Available Spare: 0% 00:19:02.022 Available Sp[2024-11-20 06:29:22.046259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:02.022 [2024-11-20 06:29:22.054161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:02.022 [2024-11-20 06:29:22.054183] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:02.022 [2024-11-20 06:29:22.054190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.022 [2024-11-20 06:29:22.054195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.022 [2024-11-20 06:29:22.054199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.022 [2024-11-20 06:29:22.054205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.022 [2024-11-20 06:29:22.054232] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:02.022 [2024-11-20 06:29:22.054239] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:02.022 [2024-11-20 06:29:22.055241] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:02.022 [2024-11-20 06:29:22.055276] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:02.022 [2024-11-20 06:29:22.055281] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:02.022 [2024-11-20 06:29:22.056241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:02.022 [2024-11-20 06:29:22.056250] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:02.022 [2024-11-20 06:29:22.056290] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:02.022 [2024-11-20 06:29:22.059163] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:02.022 are Threshold: 0% 00:19:02.022 Life Percentage Used: 0% 00:19:02.022 Data Units Read: 0 00:19:02.022 Data Units Written: 0 00:19:02.022 Host Read Commands: 0 00:19:02.022 Host Write Commands: 0 00:19:02.022 Controller Busy Time: 0 minutes 00:19:02.022 Power Cycles: 0 00:19:02.022 Power On Hours: 0 hours 00:19:02.022 Unsafe Shutdowns: 0 00:19:02.022 Unrecoverable Media Errors: 0 00:19:02.022 Lifetime Error Log Entries: 0 00:19:02.022 Warning Temperature Time: 0 minutes 00:19:02.022 Critical Temperature Time: 0 minutes 00:19:02.022 00:19:02.022 Number of Queues 00:19:02.022 ================ 00:19:02.022 Number of I/O Submission Queues: 127 00:19:02.022 Number of I/O Completion Queues: 127 00:19:02.022 00:19:02.022 Active Namespaces 00:19:02.022 ================= 00:19:02.022 Namespace ID:1 00:19:02.022 Error Recovery Timeout: Unlimited 00:19:02.022 Command Set Identifier: NVM (00h) 00:19:02.022 Deallocate: Supported 00:19:02.022 Deallocated/Unwritten Error: Not Supported 00:19:02.022 Deallocated Read Value: Unknown 00:19:02.022 Deallocate in Write Zeroes: Not Supported 00:19:02.022 Deallocated Guard Field: 0xFFFF 00:19:02.022 Flush: Supported 00:19:02.022 Reservation: Supported 00:19:02.022 Namespace Sharing Capabilities: Multiple Controllers 00:19:02.022 Size (in LBAs): 131072 (0GiB) 00:19:02.022 Capacity (in LBAs): 131072 (0GiB) 00:19:02.022 Utilization (in LBAs): 131072 (0GiB) 00:19:02.022 NGUID: 268BAD6DDFAE45AC814C391A53B761F8 00:19:02.022 UUID: 268bad6d-dfae-45ac-814c-391a53b761f8 00:19:02.022 Thin Provisioning: Not Supported 00:19:02.022 Per-NS Atomic Units: Yes 00:19:02.022 Atomic Boundary Size (Normal): 0 00:19:02.022 Atomic Boundary Size (PFail): 0 00:19:02.022 Atomic Boundary Offset: 0 00:19:02.022 Maximum Single Source Range Length: 65535 00:19:02.022 Maximum Copy Length: 65535 00:19:02.022 Maximum Source Range Count: 1 00:19:02.022 NGUID/EUI64 Never Reused: No 00:19:02.022 Namespace Write Protected: No 00:19:02.022 Number of LBA Formats: 1 00:19:02.022 Current LBA Format: LBA Format #00 00:19:02.022 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:02.022 00:19:02.022 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:02.022 [2024-11-20 06:29:22.245194] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:07.316 Initializing NVMe Controllers 00:19:07.316 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:07.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:07.316 Initialization complete. Launching workers. 00:19:07.316 ======================================================== 00:19:07.316 Latency(us) 00:19:07.316 Device Information : IOPS MiB/s Average min max 00:19:07.316 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39978.80 156.17 3201.56 833.07 6851.93 00:19:07.316 ======================================================== 00:19:07.316 Total : 39978.80 156.17 3201.56 833.07 6851.93 00:19:07.316 00:19:07.316 [2024-11-20 06:29:27.353348] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:07.316 06:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:07.316 [2024-11-20 06:29:27.545947] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:12.606 Initializing NVMe Controllers 00:19:12.606 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:12.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:12.606 Initialization complete. Launching workers. 00:19:12.606 ======================================================== 00:19:12.606 Latency(us) 00:19:12.606 Device Information : IOPS MiB/s Average min max 00:19:12.606 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39975.27 156.15 3201.66 839.35 9778.35 00:19:12.606 ======================================================== 00:19:12.606 Total : 39975.27 156.15 3201.66 839.35 9778.35 00:19:12.606 00:19:12.606 [2024-11-20 06:29:32.565084] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:12.606 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:12.606 [2024-11-20 06:29:32.765299] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:17.994 [2024-11-20 06:29:37.905251] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:17.994 Initializing NVMe Controllers 00:19:17.994 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:17.994 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:17.994 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:17.994 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:17.994 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:17.994 Initialization complete. Launching workers. 
00:19:17.994 Starting thread on core 2 00:19:17.994 Starting thread on core 3 00:19:17.994 Starting thread on core 1 00:19:17.994 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:17.994 [2024-11-20 06:29:38.160595] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:21.294 [2024-11-20 06:29:41.212298] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:21.294 Initializing NVMe Controllers 00:19:21.294 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.294 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.294 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:21.294 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:21.294 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:21.294 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:21.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:21.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:21.294 Initialization complete. Launching workers. 00:19:21.294 Starting thread on core 1 with urgent priority queue 00:19:21.294 Starting thread on core 2 with urgent priority queue 00:19:21.294 Starting thread on core 3 with urgent priority queue 00:19:21.294 Starting thread on core 0 with urgent priority queue 00:19:21.294 SPDK bdev Controller (SPDK2 ) core 0: 10633.67 IO/s 9.40 secs/100000 ios 00:19:21.294 SPDK bdev Controller (SPDK2 ) core 1: 8281.67 IO/s 12.07 secs/100000 ios 00:19:21.294 SPDK bdev Controller (SPDK2 ) core 2: 13227.33 IO/s 7.56 secs/100000 ios 00:19:21.294 SPDK bdev Controller (SPDK2 ) core 3: 10409.33 IO/s 9.61 secs/100000 ios 00:19:21.294 ======================================================== 00:19:21.294 00:19:21.294 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:21.294 [2024-11-20 06:29:41.452182] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:21.294 Initializing NVMe Controllers 00:19:21.294 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.294 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.294 Namespace ID: 1 size: 0GB 00:19:21.294 Initialization complete. 00:19:21.294 INFO: using host memory buffer for IO 00:19:21.294 Hello world! 
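All of the client runs above (spdk_nvme_perf, reconnect, arbitration, hello_world) follow the same invocation pattern: a -r transport-ID string selects the vfio-user endpoint by trtype, traddr (the socket directory) and subnqn. As a minimal sketch, the 4 KiB read benchmark can be repeated by hand like this, assuming the same workspace layout as this job (adjust SPDK_DIR for a local build tree; all flags are copied verbatim from the run above: queue depth 128, 4096-byte reads for 5 seconds on core mask 0x2, with 256 MB of memory reserved via -s):
  # assumption: the target is already serving the vfio-user endpoint below
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2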
00:19:21.294 [2024-11-20 06:29:41.464257] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:21.294 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:21.554 [2024-11-20 06:29:41.701537] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.940 Initializing NVMe Controllers 00:19:22.940 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.940 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.940 Initialization complete. Launching workers. 00:19:22.940 submit (in ns) avg, min, max = 6104.8, 2835.0, 3998482.5 00:19:22.940 complete (in ns) avg, min, max = 13243.0, 1640.0, 3998026.7 00:19:22.940 00:19:22.940 Submit histogram 00:19:22.940 ================ 00:19:22.940 Range in us Cumulative Count 00:19:22.940 2.827 - 2.840: 0.0195% ( 4) 00:19:22.940 2.840 - 2.853: 0.9822% ( 197) 00:19:22.940 2.853 - 2.867: 3.3913% ( 493) 00:19:22.940 2.867 - 2.880: 7.5889% ( 859) 00:19:22.940 2.880 - 2.893: 12.6710% ( 1040) 00:19:22.940 2.893 - 2.907: 17.7287% ( 1035) 00:19:22.940 2.907 - 2.920: 22.7277% ( 1023) 00:19:22.940 2.920 - 2.933: 29.0950% ( 1303) 00:19:22.940 2.933 - 2.947: 35.1056% ( 1230) 00:19:22.940 2.947 - 2.960: 40.7789% ( 1161) 00:19:22.940 2.960 - 2.973: 46.1249% ( 1094) 00:19:22.940 2.973 - 2.987: 51.4806% ( 1096) 00:19:22.940 2.987 - 3.000: 58.0923% ( 1353) 00:19:22.940 3.000 - 3.013: 67.6212% ( 1950) 00:19:22.940 3.013 - 3.027: 76.0897% ( 1733) 00:19:22.940 3.027 - 3.040: 83.3024% ( 1476) 00:19:22.940 3.040 - 3.053: 89.3227% ( 1232) 00:19:22.940 3.053 - 3.067: 93.8038% ( 917) 00:19:22.940 3.067 - 3.080: 97.0729% ( 669) 00:19:22.940 3.080 - 3.093: 98.4754% ( 287) 00:19:22.940 3.093 - 3.107: 99.1839% ( 145) 00:19:22.940 3.107 - 3.120: 99.4136% ( 47) 00:19:22.940 3.120 - 3.133: 99.4967% ( 17) 00:19:22.940 3.133 - 3.147: 99.5358% ( 8) 00:19:22.940 3.147 - 3.160: 99.5553% ( 4) 00:19:22.940 3.160 - 3.173: 99.5700% ( 3) 00:19:22.940 3.173 - 3.187: 99.5797% ( 2) 00:19:22.940 3.213 - 3.227: 99.5846% ( 1) 00:19:22.940 3.280 - 3.293: 99.5895% ( 1) 00:19:22.940 3.493 - 3.520: 99.6042% ( 3) 00:19:22.940 3.573 - 3.600: 99.6091% ( 1) 00:19:22.940 3.627 - 3.653: 99.6140% ( 1) 00:19:22.940 3.680 - 3.707: 99.6188% ( 1) 00:19:22.940 3.760 - 3.787: 99.6237% ( 1) 00:19:22.940 3.973 - 4.000: 99.6286% ( 1) 00:19:22.940 4.027 - 4.053: 99.6335% ( 1) 00:19:22.940 4.293 - 4.320: 99.6384% ( 1) 00:19:22.940 4.453 - 4.480: 99.6433% ( 1) 00:19:22.940 4.507 - 4.533: 99.6482% ( 1) 00:19:22.940 4.533 - 4.560: 99.6530% ( 1) 00:19:22.940 4.613 - 4.640: 99.6628% ( 2) 00:19:22.940 4.640 - 4.667: 99.6677% ( 1) 00:19:22.940 4.667 - 4.693: 99.6726% ( 1) 00:19:22.940 4.720 - 4.747: 99.6775% ( 1) 00:19:22.940 4.747 - 4.773: 99.6824% ( 1) 00:19:22.940 4.773 - 4.800: 99.6921% ( 2) 00:19:22.940 4.827 - 4.853: 99.6970% ( 1) 00:19:22.940 4.880 - 4.907: 99.7019% ( 1) 00:19:22.940 4.907 - 4.933: 99.7068% ( 1) 00:19:22.940 4.933 - 4.960: 99.7166% ( 2) 00:19:22.940 4.960 - 4.987: 99.7312% ( 3) 00:19:22.940 5.013 - 5.040: 99.7410% ( 2) 00:19:22.940 5.040 - 5.067: 99.7508% ( 2) 00:19:22.940 5.067 - 5.093: 99.7557% ( 1) 00:19:22.940 5.093 - 5.120: 99.7606% ( 1) 00:19:22.940 5.120 - 5.147: 99.7703% ( 2) 00:19:22.940 5.147 - 5.173: 99.7801% ( 2) 00:19:22.940 5.173 - 5.200: 
99.7899% ( 2) 00:19:22.940 5.200 - 5.227: 99.8045% ( 3) 00:19:22.940 5.280 - 5.307: 99.8094% ( 1) 00:19:22.940 5.360 - 5.387: 99.8143% ( 1) 00:19:22.940 5.387 - 5.413: 99.8192% ( 1) 00:19:22.940 5.413 - 5.440: 99.8339% ( 3) 00:19:22.940 5.493 - 5.520: 99.8436% ( 2) 00:19:22.940 5.520 - 5.547: 99.8534% ( 2) 00:19:22.940 5.547 - 5.573: 99.8632% ( 2) 00:19:22.940 5.733 - 5.760: 99.8681% ( 1) 00:19:22.940 5.867 - 5.893: 99.8729% ( 1) 00:19:22.940 6.000 - 6.027: 99.8827% ( 2) 00:19:22.940 6.053 - 6.080: 99.8876% ( 1) 00:19:22.940 6.107 - 6.133: 99.8925% ( 1) 00:19:22.940 6.187 - 6.213: 99.8974% ( 1) 00:19:22.940 6.400 - 6.427: 99.9023% ( 1) 00:19:22.940 6.667 - 6.693: 99.9072% ( 1) 00:19:22.941 7.520 - 7.573: 99.9120% ( 1) 00:19:22.941 8.267 - 8.320: 99.9169% ( 1) 00:19:22.941 9.760 - 9.813: 99.9218% ( 1) 00:19:22.941 3986.773 - 4014.080: 100.0000% ( 16) 00:19:22.941 00:19:22.941 Complete histogram 00:19:22.941 ================== 00:19:22.941 [2024-11-20 06:29:42.794682] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:22.941 Range in us Cumulative Count 00:19:22.941 1.640 - 1.647: 0.4935% ( 101) 00:19:22.941 1.647 - 1.653: 0.9138% ( 86) 00:19:22.941 1.653 - 1.660: 0.9969% ( 17) 00:19:22.941 1.660 - 1.667: 1.0897% ( 19) 00:19:22.941 1.667 - 1.673: 1.1581% ( 14) 00:19:22.941 1.673 - 1.680: 1.2070% ( 10) 00:19:22.941 1.680 - 1.687: 1.2217% ( 3) 00:19:22.941 1.687 - 1.693: 1.2363% ( 3) 00:19:22.941 1.693 - 1.700: 1.2461% ( 2) 00:19:22.941 1.700 - 1.707: 10.8923% ( 1974) 00:19:22.941 1.707 - 1.720: 60.0078% ( 10051) 00:19:22.941 1.720 - 1.733: 76.0115% ( 3275) 00:19:22.941 1.733 - 1.747: 82.2078% ( 1268) 00:19:22.941 1.747 - 1.760: 83.5711% ( 279) 00:19:22.941 1.760 - 1.773: 87.1384% ( 730) 00:19:22.941 1.773 - 1.787: 92.8069% ( 1160) 00:19:22.941 1.787 - 1.800: 97.0729% ( 873) 00:19:22.941 1.800 - 1.813: 98.7735% ( 348) 00:19:22.941 1.813 - 1.827: 99.4136% ( 131) 00:19:22.941 1.827 - 1.840: 99.5504% ( 28) 00:19:22.941 1.840 - 1.853: 99.5602% ( 2) 00:19:22.941 1.853 - 1.867: 99.5651% ( 1) 00:19:22.941 3.307 - 3.320: 99.5700% ( 1) 00:19:22.941 3.360 - 3.373: 99.5749% ( 1) 00:19:22.941 3.547 - 3.573: 99.5797% ( 1) 00:19:22.941 3.600 - 3.627: 99.5895% ( 2) 00:19:22.941 3.680 - 3.707: 99.5993% ( 2) 00:19:22.941 3.760 - 3.787: 99.6091% ( 2) 00:19:22.941 3.787 - 3.813: 99.6188% ( 2) 00:19:22.941 3.813 - 3.840: 99.6237% ( 1) 00:19:22.941 3.840 - 3.867: 99.6335% ( 2) 00:19:22.941 3.893 - 3.920: 99.6384% ( 1) 00:19:22.941 3.947 - 3.973: 99.6433% ( 1) 00:19:22.941 4.000 - 4.027: 99.6482% ( 1) 00:19:22.941 4.027 - 4.053: 99.6530% ( 1) 00:19:22.941 4.187 - 4.213: 99.6579% ( 1) 00:19:22.941 4.240 - 4.267: 99.6726% ( 3) 00:19:22.941 4.267 - 4.293: 99.6775% ( 1) 00:19:22.941 4.293 - 4.320: 99.6873% ( 2) 00:19:22.941 4.320 - 4.347: 99.6921% ( 1) 00:19:22.941 4.693 - 4.720: 99.6970% ( 1) 00:19:22.941 4.907 - 4.933: 99.7019% ( 1) 00:19:22.941 4.933 - 4.960: 99.7068% ( 1) 00:19:22.941 8.427 - 8.480: 99.7117% ( 1) 00:19:22.941 3986.773 - 4014.080: 100.0000% ( 59) 00:19:22.941 00:19:22.941 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:22.941 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:22.941 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:22.941 
06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:22.941 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:22.941 [ 00:19:22.941 { 00:19:22.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:22.941 "subtype": "Discovery", 00:19:22.941 "listen_addresses": [], 00:19:22.941 "allow_any_host": true, 00:19:22.941 "hosts": [] 00:19:22.941 }, 00:19:22.941 { 00:19:22.941 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:22.941 "subtype": "NVMe", 00:19:22.941 "listen_addresses": [ 00:19:22.941 { 00:19:22.941 "trtype": "VFIOUSER", 00:19:22.941 "adrfam": "IPv4", 00:19:22.941 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:22.941 "trsvcid": "0" 00:19:22.941 } 00:19:22.941 ], 00:19:22.941 "allow_any_host": true, 00:19:22.941 "hosts": [], 00:19:22.941 "serial_number": "SPDK1", 00:19:22.941 "model_number": "SPDK bdev Controller", 00:19:22.941 "max_namespaces": 32, 00:19:22.941 "min_cntlid": 1, 00:19:22.941 "max_cntlid": 65519, 00:19:22.941 "namespaces": [ 00:19:22.941 { 00:19:22.941 "nsid": 1, 00:19:22.941 "bdev_name": "Malloc1", 00:19:22.941 "name": "Malloc1", 00:19:22.941 "nguid": "A25F654EC97A4F4C8DBA45A12BF5CACF", 00:19:22.941 "uuid": "a25f654e-c97a-4f4c-8dba-45a12bf5cacf" 00:19:22.941 }, 00:19:22.941 { 00:19:22.941 "nsid": 2, 00:19:22.941 "bdev_name": "Malloc3", 00:19:22.941 "name": "Malloc3", 00:19:22.941 "nguid": "F9CB00064BAE44F79E69D6EF22760C9F", 00:19:22.941 "uuid": "f9cb0006-4bae-44f7-9e69-d6ef22760c9f" 00:19:22.941 } 00:19:22.941 ] 00:19:22.941 }, 00:19:22.941 { 00:19:22.941 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:22.941 "subtype": "NVMe", 00:19:22.941 "listen_addresses": [ 00:19:22.941 { 00:19:22.941 "trtype": "VFIOUSER", 00:19:22.941 "adrfam": "IPv4", 00:19:22.941 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:22.941 "trsvcid": "0" 00:19:22.941 } 00:19:22.941 ], 00:19:22.941 "allow_any_host": true, 00:19:22.941 "hosts": [], 00:19:22.941 "serial_number": "SPDK2", 00:19:22.941 "model_number": "SPDK bdev Controller", 00:19:22.941 "max_namespaces": 32, 00:19:22.941 "min_cntlid": 1, 00:19:22.941 "max_cntlid": 65519, 00:19:22.941 "namespaces": [ 00:19:22.941 { 00:19:22.941 "nsid": 1, 00:19:22.941 "bdev_name": "Malloc2", 00:19:22.941 "name": "Malloc2", 00:19:22.941 "nguid": "268BAD6DDFAE45AC814C391A53B761F8", 00:19:22.941 "uuid": "268bad6d-dfae-45ac-814c-391a53b761f8" 00:19:22.941 } 00:19:22.941 ] 00:19:22.941 } 00:19:22.941 ] 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2796400 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:22.941 [2024-11-20 06:29:43.174544] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.941 Malloc4 00:19:22.941 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:23.202 [2024-11-20 06:29:43.368938] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:23.202 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:23.202 Asynchronous Event Request test 00:19:23.202 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:23.202 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:23.202 Registering asynchronous event callbacks... 00:19:23.202 Starting namespace attribute notice tests for all controllers... 00:19:23.202 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:23.202 aer_cb - Changed Namespace 00:19:23.202 Cleaning up... 
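This is the complete namespace-attribute-change AER round trip: the aer tool subscribes for asynchronous events, the test hot-adds a second namespace, and the controller answers the outstanding Asynchronous Event Request with a Changed Namespace notice (log page 4, event type 0x02 in the callback above); the refreshed subsystem listing that follows shows the result. A sketch of the hot-add half against a running target, using the same RPCs as the log (the rpc.py path assumes this job's workspace):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # create a 64 MiB malloc bdev with 512-byte blocks and expose it as NSID 2 of cnode2;
  # attached hosts with a pending AER receive the namespace-changed notice
  "$RPC" bdev_malloc_create 64 512 --name Malloc4
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  # the new namespace should now appear in the subsystem listing
  "$RPC" nvmf_get_subsystems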
00:19:23.463 [ 00:19:23.463 { 00:19:23.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:23.463 "subtype": "Discovery", 00:19:23.463 "listen_addresses": [], 00:19:23.463 "allow_any_host": true, 00:19:23.463 "hosts": [] 00:19:23.463 }, 00:19:23.463 { 00:19:23.463 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:23.463 "subtype": "NVMe", 00:19:23.463 "listen_addresses": [ 00:19:23.463 { 00:19:23.463 "trtype": "VFIOUSER", 00:19:23.463 "adrfam": "IPv4", 00:19:23.463 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:23.463 "trsvcid": "0" 00:19:23.463 } 00:19:23.463 ], 00:19:23.463 "allow_any_host": true, 00:19:23.463 "hosts": [], 00:19:23.463 "serial_number": "SPDK1", 00:19:23.463 "model_number": "SPDK bdev Controller", 00:19:23.463 "max_namespaces": 32, 00:19:23.463 "min_cntlid": 1, 00:19:23.463 "max_cntlid": 65519, 00:19:23.463 "namespaces": [ 00:19:23.463 { 00:19:23.463 "nsid": 1, 00:19:23.463 "bdev_name": "Malloc1", 00:19:23.463 "name": "Malloc1", 00:19:23.463 "nguid": "A25F654EC97A4F4C8DBA45A12BF5CACF", 00:19:23.463 "uuid": "a25f654e-c97a-4f4c-8dba-45a12bf5cacf" 00:19:23.463 }, 00:19:23.463 { 00:19:23.463 "nsid": 2, 00:19:23.463 "bdev_name": "Malloc3", 00:19:23.463 "name": "Malloc3", 00:19:23.463 "nguid": "F9CB00064BAE44F79E69D6EF22760C9F", 00:19:23.463 "uuid": "f9cb0006-4bae-44f7-9e69-d6ef22760c9f" 00:19:23.463 } 00:19:23.463 ] 00:19:23.463 }, 00:19:23.463 { 00:19:23.463 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:23.463 "subtype": "NVMe", 00:19:23.463 "listen_addresses": [ 00:19:23.463 { 00:19:23.463 "trtype": "VFIOUSER", 00:19:23.463 "adrfam": "IPv4", 00:19:23.463 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:23.463 "trsvcid": "0" 00:19:23.463 } 00:19:23.463 ], 00:19:23.463 "allow_any_host": true, 00:19:23.463 "hosts": [], 00:19:23.463 "serial_number": "SPDK2", 00:19:23.463 "model_number": "SPDK bdev Controller", 00:19:23.463 "max_namespaces": 32, 00:19:23.463 "min_cntlid": 1, 00:19:23.463 "max_cntlid": 65519, 00:19:23.463 "namespaces": [ 00:19:23.463 { 00:19:23.463 "nsid": 1, 00:19:23.463 "bdev_name": "Malloc2", 00:19:23.463 "name": "Malloc2", 00:19:23.463 "nguid": "268BAD6DDFAE45AC814C391A53B761F8", 00:19:23.463 "uuid": "268bad6d-dfae-45ac-814c-391a53b761f8" 00:19:23.463 }, 00:19:23.463 { 00:19:23.463 "nsid": 2, 00:19:23.463 "bdev_name": "Malloc4", 00:19:23.463 "name": "Malloc4", 00:19:23.463 "nguid": "37F9671E73AE4674A78676E1A3F6D58F", 00:19:23.463 "uuid": "37f9671e-73ae-4674-a786-76e1a3f6d58f" 00:19:23.463 } 00:19:23.463 ] 00:19:23.463 } 00:19:23.463 ] 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2796400 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2787594 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2787594 ']' 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2787594 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2787594 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2787594' 00:19:23.463 killing process with pid 2787594 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2787594 00:19:23.463 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2787594 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2796693 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2796693' 00:19:23.724 Process pid: 2796693 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2796693 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2796693 ']' 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.724 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:23.724 [2024-11-20 06:29:43.842179] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:23.724 [2024-11-20 06:29:43.843089] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:19:23.724 [2024-11-20 06:29:43.843133] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.724 [2024-11-20 06:29:43.928266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.724 [2024-11-20 06:29:43.958562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.724 [2024-11-20 06:29:43.958594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.724 [2024-11-20 06:29:43.958599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.724 [2024-11-20 06:29:43.958604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.724 [2024-11-20 06:29:43.958608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.724 [2024-11-20 06:29:43.959939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.724 [2024-11-20 06:29:43.960089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.724 [2024-11-20 06:29:43.960240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.724 [2024-11-20 06:29:43.960242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.985 [2024-11-20 06:29:44.012019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:23.985 [2024-11-20 06:29:44.012963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:23.985 [2024-11-20 06:29:44.013891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:23.985 [2024-11-20 06:29:44.014632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:23.985 [2024-11-20 06:29:44.014656] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
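Unlike the earlier run, the target is restarted here with --interrupt-mode, switching the reactors and nvmf poll-group threads to event-driven operation (the "Set spdk_thread ... to intr mode" notices above) instead of busy polling, and the VFIOUSER transport is then created with the extra '-M -I' transport arguments this test exercises, as the RPC calls below show. A condensed bring-up sketch, with paths taken from this job's workspace and flags copied from the log:
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # cores 0-3, all tracepoint groups enabled, reactors in interrupt mode
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # once the RPC socket is up, create the vfio-user transport with the same flags as the test
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I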
00:19:24.556 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.556 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:19:24.556 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:25.498 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:25.758 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:25.758 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:25.759 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:25.759 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:25.759 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:25.759 Malloc1 00:19:26.019 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:26.019 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:26.280 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:26.541 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:26.541 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:26.541 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:26.541 Malloc2 00:19:26.541 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:26.802 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:27.063 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:27.324 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2796693 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 2796693 ']' 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2796693 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2796693 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2796693' 00:19:27.325 killing process with pid 2796693 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2796693 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2796693 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:27.325 00:19:27.325 real 0m50.952s 00:19:27.325 user 3m15.318s 00:19:27.325 sys 0m2.683s 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:27.325 ************************************ 00:19:27.325 END TEST nvmf_vfio_user 00:19:27.325 ************************************ 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:27.325 06:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.586 ************************************ 00:19:27.586 START TEST nvmf_vfio_user_nvme_compliance 00:19:27.586 ************************************ 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:27.586 * Looking for test storage... 
00:19:27.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.586 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.587 --rc genhtml_branch_coverage=1 00:19:27.587 --rc genhtml_function_coverage=1 00:19:27.587 --rc genhtml_legend=1 00:19:27.587 --rc geninfo_all_blocks=1 00:19:27.587 --rc geninfo_unexecuted_blocks=1 00:19:27.587 00:19:27.587 ' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.587 --rc genhtml_branch_coverage=1 00:19:27.587 --rc genhtml_function_coverage=1 00:19:27.587 --rc genhtml_legend=1 00:19:27.587 --rc geninfo_all_blocks=1 00:19:27.587 --rc geninfo_unexecuted_blocks=1 00:19:27.587 00:19:27.587 ' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.587 --rc genhtml_branch_coverage=1 00:19:27.587 --rc genhtml_function_coverage=1 00:19:27.587 --rc genhtml_legend=1 00:19:27.587 --rc geninfo_all_blocks=1 00:19:27.587 --rc geninfo_unexecuted_blocks=1 00:19:27.587 00:19:27.587 ' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.587 --rc genhtml_branch_coverage=1 00:19:27.587 --rc genhtml_function_coverage=1 00:19:27.587 --rc genhtml_legend=1 00:19:27.587 --rc geninfo_all_blocks=1 00:19:27.587 --rc 
geninfo_unexecuted_blocks=1 00:19:27.587 00:19:27.587 ' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:27.587 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2797451 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2797451' 00:19:27.849 Process pid: 2797451 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2797451 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2797451 ']' 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:27.849 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:27.849 [2024-11-20 06:29:47.918553] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:19:27.849 [2024-11-20 06:29:47.918618] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.849 [2024-11-20 06:29:48.006530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.849 [2024-11-20 06:29:48.040118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.849 [2024-11-20 06:29:48.040154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.849 [2024-11-20 06:29:48.040166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.849 [2024-11-20 06:29:48.040171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.849 [2024-11-20 06:29:48.040175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.849 [2024-11-20 06:29:48.041417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.849 [2024-11-20 06:29:48.041570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.849 [2024-11-20 06:29:48.041572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.792 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.792 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:19:28.792 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.739 malloc0 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:29.739 06:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.739 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:29.739 00:19:29.739 00:19:29.739 CUnit - A unit testing framework for C - Version 2.1-3 00:19:29.739 http://cunit.sourceforge.net/ 00:19:29.739 00:19:29.739 00:19:29.739 Suite: nvme_compliance 00:19:29.739 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 06:29:49.968347] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.739 [2024-11-20 06:29:49.969650] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:29.740 [2024-11-20 06:29:49.969665] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:29.740 [2024-11-20 06:29:49.969672] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:29.740 [2024-11-20 06:29:49.972370] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.740 passed 00:19:30.000 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 06:29:50.047876] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.000 [2024-11-20 06:29:50.051900] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.000 passed 00:19:30.000 Test: admin_identify_ns ...[2024-11-20 06:29:50.127498] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.000 [2024-11-20 06:29:50.188167] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:30.000 [2024-11-20 06:29:50.196170] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:30.000 [2024-11-20 06:29:50.217245] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:30.000 passed 00:19:30.261 Test: admin_get_features_mandatory_features ...[2024-11-20 06:29:50.293335] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.261 [2024-11-20 06:29:50.296360] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.261 passed 00:19:30.261 Test: admin_get_features_optional_features ...[2024-11-20 06:29:50.372790] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.261 [2024-11-20 06:29:50.375807] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.261 passed 00:19:30.261 Test: admin_set_features_number_of_queues ...[2024-11-20 06:29:50.450511] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.521 [2024-11-20 06:29:50.559268] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.521 passed 00:19:30.521 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 06:29:50.633493] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.521 [2024-11-20 06:29:50.636515] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.521 passed 00:19:30.521 Test: admin_get_log_page_with_lpo ...[2024-11-20 06:29:50.711263] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.521 [2024-11-20 06:29:50.781166] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:30.521 [2024-11-20 06:29:50.794210] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.782 passed 00:19:30.782 Test: fabric_property_get ...[2024-11-20 06:29:50.867445] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.782 [2024-11-20 06:29:50.868647] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:30.782 [2024-11-20 06:29:50.870470] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.782 passed 00:19:30.782 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 06:29:50.946917] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.782 [2024-11-20 06:29:50.948115] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:30.782 [2024-11-20 06:29:50.949939] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.782 passed 00:19:30.782 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 06:29:51.025664] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.042 [2024-11-20 06:29:51.110164] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:31.042 [2024-11-20 06:29:51.126168] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:31.042 [2024-11-20 06:29:51.131248] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.042 passed 00:19:31.043 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 06:29:51.204473] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.043 [2024-11-20 06:29:51.205676] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:31.043 [2024-11-20 06:29:51.207490] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.043 passed 00:19:31.043 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 06:29:51.282494] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.303 [2024-11-20 06:29:51.362166] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:31.303 [2024-11-20 06:29:51.386163] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:31.303 [2024-11-20 06:29:51.391232] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.303 passed 00:19:31.303 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 06:29:51.463385] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.303 [2024-11-20 06:29:51.464588] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:31.303 [2024-11-20 06:29:51.464604] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:31.303 [2024-11-20 06:29:51.467413] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.303 passed 00:19:31.303 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 06:29:51.542508] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.563 [2024-11-20 06:29:51.630167] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:31.563 [2024-11-20 06:29:51.638171] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:31.564 [2024-11-20 06:29:51.645163] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:31.564 [2024-11-20 06:29:51.650163] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:31.564 [2024-11-20 06:29:51.674232] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.564 passed 00:19:31.564 Test: admin_create_io_sq_verify_pc ...[2024-11-20 06:29:51.746430] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.564 [2024-11-20 06:29:51.773170] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:31.564 [2024-11-20 06:29:51.790631] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.564 passed 00:19:31.824 Test: admin_create_io_qp_max_qps ...[2024-11-20 06:29:51.866082] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:32.766 [2024-11-20 06:29:52.995167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:33.337 [2024-11-20 06:29:53.375358] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.337 passed 00:19:33.337 Test: admin_create_io_sq_shared_cq ...[2024-11-20 06:29:53.449108] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.337 [2024-11-20 06:29:53.582165] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:33.599 [2024-11-20 06:29:53.619207] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.599 passed 00:19:33.599 00:19:33.599 Run Summary: Type Total Ran Passed Failed Inactive 00:19:33.599 suites 1 1 n/a 0 0 00:19:33.599 tests 18 18 18 0 0 00:19:33.599 asserts 
360 360 360 0 n/a 00:19:33.599 00:19:33.599 Elapsed time = 1.499 seconds 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2797451 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2797451 ']' 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2797451 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2797451 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2797451' 00:19:33.599 killing process with pid 2797451 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2797451 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2797451 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:33.599 00:19:33.599 real 0m6.212s 00:19:33.599 user 0m17.624s 00:19:33.599 sys 0m0.548s 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:33.599 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:33.599 ************************************ 00:19:33.599 END TEST nvmf_vfio_user_nvme_compliance 00:19:33.599 ************************************ 00:19:33.861 06:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:33.861 06:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:33.861 06:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:33.861 06:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.861 ************************************ 00:19:33.861 START TEST nvmf_vfio_user_fuzz 00:19:33.861 ************************************ 00:19:33.861 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:33.861 * Looking for test storage... 
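The compliance run above reduces to a short RPC sequence against the freshly started target: create the VFIOUSER transport, back it with a 64 MB malloc bdev (512-byte blocks), expose that bdev through subsystem nqn.2021-09.io.spdk:cnode0, and listen on /var/run/vfio-user before pointing nvme_compliance at the socket directory. A minimal standalone sketch of that sequence, using SPDK's stock scripts/rpc.py client in place of the log's rpc_cmd wrapper and assuming the commands run from an SPDK checkout with a target already up:

    RPC=./scripts/rpc.py
    NQN=nqn.2021-09.io.spdk:cnode0
    TRADDR=/var/run/vfio-user

    mkdir -p "$TRADDR"
    $RPC nvmf_create_transport -t VFIOUSER              # vfio-user transport, default opts
    $RPC bdev_malloc_create 64 512 -b malloc0           # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s spdk -m 32  # allow any host, 32 namespaces max
    $RPC nvmf_subsystem_add_ns "$NQN" malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0

    ./test/nvme/compliance/nvme_compliance -g \
        -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$NQN"

All 18 compliance tests pass against this configuration (Elapsed time = 1.499 seconds in the summary above).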
00:19:33.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.861 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:33.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.862 --rc genhtml_branch_coverage=1 00:19:33.862 --rc genhtml_function_coverage=1 00:19:33.862 --rc genhtml_legend=1 00:19:33.862 --rc geninfo_all_blocks=1 00:19:33.862 --rc geninfo_unexecuted_blocks=1 00:19:33.862 00:19:33.862 ' 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:33.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.862 --rc genhtml_branch_coverage=1 00:19:33.862 --rc genhtml_function_coverage=1 00:19:33.862 --rc genhtml_legend=1 00:19:33.862 --rc geninfo_all_blocks=1 00:19:33.862 --rc geninfo_unexecuted_blocks=1 00:19:33.862 00:19:33.862 ' 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:33.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.862 --rc genhtml_branch_coverage=1 00:19:33.862 --rc genhtml_function_coverage=1 00:19:33.862 --rc genhtml_legend=1 00:19:33.862 --rc geninfo_all_blocks=1 00:19:33.862 --rc geninfo_unexecuted_blocks=1 00:19:33.862 00:19:33.862 ' 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:33.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.862 --rc genhtml_branch_coverage=1 00:19:33.862 --rc genhtml_function_coverage=1 00:19:33.862 --rc genhtml_legend=1 00:19:33.862 --rc geninfo_all_blocks=1 00:19:33.862 --rc geninfo_unexecuted_blocks=1 00:19:33.862 00:19:33.862 ' 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.862 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.124 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:34.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2798859 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2798859' 00:19:34.125 Process pid: 2798859 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2798859 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2798859 ']' 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
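At this point vfio_user_fuzz.sh has launched its own dedicated target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1, pid 2798859) and blocks in waitforlisten until the RPC socket answers. A hedged sketch of that readiness gate, polling the socket with rpc.py rather than reproducing autotest_common.sh's exact helper:

    # Poll an SPDK target's RPC socket until it accepts commands (a sketch,
    # not the literal waitforlisten implementation).
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        echo "target never came up on $sock" >&2
        return 1
    }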
00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.125 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.067 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.067 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:19:35.067 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:36.010 malloc0 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
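With the transport, the malloc0 namespace, and the vfio-user listener recreated for this target (the same RPC sequence as the compliance section, minus the -m 32 namespace cap), the script assembles the transport ID and hands it to the fuzzer, as the invocation that follows shows. Pulled out of the xtrace for readability: -t 30 matches the thirty-second window between launch and the run summary, -S supplies a fixed seed, and the remaining flags are reproduced from the log rather than interpreted here:

    FUZZ=./test/app/fuzz/nvme_fuzz/nvme_fuzz
    TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    $FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a   # 30 s run on core 1 (mask 0x2)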
00:19:36.010 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:08.132 Fuzzing completed. Shutting down the fuzz application 00:20:08.132 00:20:08.132 Dumping successful admin opcodes: 00:20:08.132 8, 9, 10, 24, 00:20:08.132 Dumping successful io opcodes: 00:20:08.132 0, 00:20:08.132 NS: 0x20000081ef00 I/O qp, Total commands completed: 1233270, total successful commands: 4843, random_seed: 1227637696 00:20:08.132 NS: 0x20000081ef00 admin qp, Total commands completed: 258177, total successful commands: 2083, random_seed: 3985021760 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2798859 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2798859 ']' 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2798859 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2798859 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2798859' 00:20:08.132 killing process with pid 2798859 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2798859 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2798859 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:08.132 00:20:08.132 real 0m32.787s 00:20:08.132 user 0m34.756s 00:20:08.132 sys 0m25.978s 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:08.132 
************************************ 00:20:08.132 END TEST nvmf_vfio_user_fuzz 00:20:08.132 ************************************ 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:08.132 ************************************ 00:20:08.132 START TEST nvmf_auth_target 00:20:08.132 ************************************ 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:08.132 * Looking for test storage... 00:20:08.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:08.132 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:08.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.133 --rc genhtml_branch_coverage=1 00:20:08.133 --rc genhtml_function_coverage=1 00:20:08.133 --rc genhtml_legend=1 00:20:08.133 --rc geninfo_all_blocks=1 00:20:08.133 --rc geninfo_unexecuted_blocks=1 00:20:08.133 00:20:08.133 ' 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:08.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.133 --rc genhtml_branch_coverage=1 00:20:08.133 --rc genhtml_function_coverage=1 00:20:08.133 --rc genhtml_legend=1 00:20:08.133 --rc geninfo_all_blocks=1 00:20:08.133 --rc geninfo_unexecuted_blocks=1 00:20:08.133 00:20:08.133 ' 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:08.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.133 --rc genhtml_branch_coverage=1 00:20:08.133 --rc genhtml_function_coverage=1 00:20:08.133 --rc genhtml_legend=1 00:20:08.133 --rc geninfo_all_blocks=1 00:20:08.133 --rc geninfo_unexecuted_blocks=1 00:20:08.133 00:20:08.133 ' 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:08.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.133 --rc genhtml_branch_coverage=1 00:20:08.133 --rc genhtml_function_coverage=1 00:20:08.133 --rc genhtml_legend=1 00:20:08.133 --rc geninfo_all_blocks=1 00:20:08.133 --rc geninfo_unexecuted_blocks=1 00:20:08.133 00:20:08.133 ' 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.133 06:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.133 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:08.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:08.133 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:14.805 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:14.806 
06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:14.806 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.806 06:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:14.806 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:14.806 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:14.806 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:14.806 06:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:14.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:20:14.806 00:20:14.806 --- 10.0.0.2 ping statistics --- 00:20:14.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.806 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:20:14.806 00:20:14.806 --- 10.0.0.1 ping statistics --- 00:20:14.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.806 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:14.806 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2809405 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2809405 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2809405 ']' 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
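The trace above is the stock nvmf TCP fixture: one port of the e810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace with -L nvmf_auth debug logging. A minimal condensation of that sequence, assuming the same cvl_0_* aliases and a root shell (not the test's actual helper, which also handles RDMA and virtual topologies):

set -e
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"              # target port leaves the root ns
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator side, root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1            # target ns -> initiator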
00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:14.807 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2809519 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b1a888bed009723b26e6016ba2aed07ed127888550716fb 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NsG 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b1a888bed009723b26e6016ba2aed07ed127888550716fb 0 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b1a888bed009723b26e6016ba2aed07ed127888550716fb 0 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b1a888bed009723b26e6016ba2aed07ed127888550716fb 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
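The gen_dhchap_key null 48 call above pulls 24 random bytes from /dev/urandom via xxd, keeps them as a 48-character hex string, and hands that string to an inline python snippet that emits the printable DH-HMAC-CHAP interchange form: DHHC-1:<2-digit hash id>:<base64(key || little-endian CRC-32 of key)>:. A self-contained sketch of that step (the function name is mine, not SPDK's):

gen_key() {
  local hexlen=$1 digest_id=$2   # hexlen 48/64; digest_id 0=null 1=sha256 2=sha384 3=sha512
  local key
  key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # hexlen hex characters of randomness
  python3 - "$key" "$digest_id" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex text itself is the key material
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
PY
}

gen_key 48 0   # 48-char key, hash id 00 ("null"), as in the trace above

Note that the ASCII hex string, not its decoded bytes, is the secret: base64-decoding the NmIxYTg4... secret that appears later in this log yields exactly the 6b1a888b... string generated here plus its CRC trailer.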
00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NsG 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NsG 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.NsG 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dc6239ef9cd8637398b7d9a7e2a7319e738d6389e1e23624b39f85069701d33f 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UzS 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dc6239ef9cd8637398b7d9a7e2a7319e738d6389e1e23624b39f85069701d33f 3 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dc6239ef9cd8637398b7d9a7e2a7319e738d6389e1e23624b39f85069701d33f 3 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dc6239ef9cd8637398b7d9a7e2a7319e738d6389e1e23624b39f85069701d33f 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UzS 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UzS 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.UzS 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
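Each index carries two secrets: keys[i], the host's DH-HMAC-CHAP key, and ckeys[i], the controller key used for bidirectional authentication; both are written to mktemp files and locked down to 0600. To sanity-check one of the generated strings, a hypothetical helper (not part of the suite) could decode the base64 payload and verify the CRC-32 trailer:

check_key() {
  python3 - "$1" <<'PY'
import base64, sys, zlib
prefix, hmac_id, b64, _ = sys.argv[1].split(":")
raw = base64.b64decode(b64)
key, crc = raw[:-4], raw[-4:]
assert prefix == "DHHC-1", "unexpected prefix"
assert zlib.crc32(key).to_bytes(4, "little") == crc, "CRC mismatch"
print("ok: {}-byte key, hash id {}".format(len(key), hmac_id))
PY
}

check_key "$(gen_key 48 0)"   # gen_key: see the sketch above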
00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1e698c822099e8239020cad2ae047b94 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.M3K 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1e698c822099e8239020cad2ae047b94 1 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1e698c822099e8239020cad2ae047b94 1 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1e698c822099e8239020cad2ae047b94 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.M3K 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.M3K 00:20:15.381 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.M3K 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4d85ad803e36417aa82e7e48b4725feac238f2e4a813e0d0 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.j24 00:20:15.643 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4d85ad803e36417aa82e7e48b4725feac238f2e4a813e0d0 2 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4d85ad803e36417aa82e7e48b4725feac238f2e4a813e0d0 2 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.644 06:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4d85ad803e36417aa82e7e48b4725feac238f2e4a813e0d0 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.j24 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.j24 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.j24 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f9d8b444e75c33581f43177318531aa81ee79b8b115fefd5 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fsa 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f9d8b444e75c33581f43177318531aa81ee79b8b115fefd5 2 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f9d8b444e75c33581f43177318531aa81ee79b8b115fefd5 2 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f9d8b444e75c33581f43177318531aa81ee79b8b115fefd5 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fsa 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fsa 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fsa 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
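By this point the pattern of the key-generation block is clear: each index pairs a host key with a controller key of a different digest and length, and the generation continues below for the remaining pairs. The full matrix the test ends up with could be captured in a sketch like this (helper names are mine; gen_key is from the earlier sketch):

declare -a keys ckeys
declare -A digest_id=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

make_key_file() {   # digest name + hex length -> path of a 0600 key file
  local file
  file=$(mktemp -t "spdk.key-$1.XXX")
  gen_key "$2" "${digest_id[$1]}" > "$file"
  chmod 0600 "$file"
  echo "$file"
}

keys[0]=$(make_key_file null 48);   ckeys[0]=$(make_key_file sha512 64)
keys[1]=$(make_key_file sha256 32); ckeys[1]=$(make_key_file sha384 48)
keys[2]=$(make_key_file sha384 48); ckeys[2]=$(make_key_file sha256 32)
keys[3]=$(make_key_file sha512 64); ckeys[3]=   # no controller key for the last index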
00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9e02d83828265db585582a3380b34622 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.njj 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9e02d83828265db585582a3380b34622 1 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9e02d83828265db585582a3380b34622 1 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9e02d83828265db585582a3380b34622 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.njj 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.njj 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.njj 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ad2bdc51c07dd58ac1de105881d1adeb3b10b193bd48d226082f057a306e8a51 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.e9b 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key ad2bdc51c07dd58ac1de105881d1adeb3b10b193bd48d226082f057a306e8a51 3 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ad2bdc51c07dd58ac1de105881d1adeb3b10b193bd48d226082f057a306e8a51 3 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ad2bdc51c07dd58ac1de105881d1adeb3b10b193bd48d226082f057a306e8a51 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:15.644 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.e9b 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.e9b 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.e9b 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2809405 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2809405 ']' 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.906 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2809519 /var/tmp/host.sock 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2809519 ']' 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:15.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
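Two daemons are now up: nvmf_tgt (pid 2809405) serving the target inside the namespace with its JSON-RPC socket at /var/tmp/spdk.sock, and spdk_tgt (pid 2809519) acting as the host side on /var/tmp/host.sock, which is where the hostrpc wrapper points rpc.py. The waitforlisten calls traced here block until a daemon's RPC socket answers; a minimal stand-in, assuming SPDK's scripts/rpc.py and its stock rpc_get_methods method (the retry policy is mine):

wait_for_rpc() {   # poll until $1 (pid) is alive and $2 (socket) answers JSON-RPC
  local pid=$1 sock=$2 i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1            # daemon exited early
    scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1                                            # timed out
}

wait_for_rpc "$hostpid" /var/tmp/host.sock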
00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.906 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NsG 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.NsG 00:20:16.167 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NsG 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.UzS ]] 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UzS 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UzS 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UzS 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.M3K 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.428 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.688 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.688 06:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.M3K 00:20:16.688 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.M3K 00:20:16.688 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.j24 ]] 00:20:16.688 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j24 00:20:16.689 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.689 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.689 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.689 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j24 00:20:16.689 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j24 00:20:16.949 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:16.949 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fsa 00:20:16.949 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.949 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.949 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.949 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fsa 00:20:16.949 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fsa 00:20:17.210 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.njj ]] 00:20:17.210 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.njj 00:20:17.210 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.210 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.210 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.210 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.njj 00:20:17.210 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.njj 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:17.471 06:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.e9b 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.e9b 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.e9b 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.471 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.731 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.731 
06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.991 00:20:17.991 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.991 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.991 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.252 { 00:20:18.252 "cntlid": 1, 00:20:18.252 "qid": 0, 00:20:18.252 "state": "enabled", 00:20:18.252 "thread": "nvmf_tgt_poll_group_000", 00:20:18.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:18.252 "listen_address": { 00:20:18.252 "trtype": "TCP", 00:20:18.252 "adrfam": "IPv4", 00:20:18.252 "traddr": "10.0.0.2", 00:20:18.252 "trsvcid": "4420" 00:20:18.252 }, 00:20:18.252 "peer_address": { 00:20:18.252 "trtype": "TCP", 00:20:18.252 "adrfam": "IPv4", 00:20:18.252 "traddr": "10.0.0.1", 00:20:18.252 "trsvcid": "57894" 00:20:18.252 }, 00:20:18.252 "auth": { 00:20:18.252 "state": "completed", 00:20:18.252 "digest": "sha256", 00:20:18.252 "dhgroup": "null" 00:20:18.252 } 00:20:18.252 } 00:20:18.252 ]' 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.252 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.513 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:18.513 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.086 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.348 06:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.348 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.608 00:20:19.608 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.608 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.608 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.870 { 00:20:19.870 "cntlid": 3, 00:20:19.870 "qid": 0, 00:20:19.870 "state": "enabled", 00:20:19.870 "thread": "nvmf_tgt_poll_group_000", 00:20:19.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:19.870 "listen_address": { 00:20:19.870 "trtype": "TCP", 00:20:19.870 "adrfam": "IPv4", 00:20:19.870 "traddr": "10.0.0.2", 00:20:19.870 "trsvcid": "4420" 00:20:19.870 }, 00:20:19.870 "peer_address": { 00:20:19.870 "trtype": "TCP", 00:20:19.870 "adrfam": "IPv4", 00:20:19.870 "traddr": "10.0.0.1", 00:20:19.870 "trsvcid": "57918" 00:20:19.870 }, 00:20:19.870 "auth": { 00:20:19.870 "state": "completed", 00:20:19.870 "digest": "sha256", 00:20:19.870 "dhgroup": "null" 00:20:19.870 } 00:20:19.870 } 00:20:19.870 ]' 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.870 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.870 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.870 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.870 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.870 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.870 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.870 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.130 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:20.131 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.700 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.960 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:20.960 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.960 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.960 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.960 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:20.960 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.960 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.961 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.961 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.961 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.961 06:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.961 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.961 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.221 00:20:21.221 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.221 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.221 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.482 { 00:20:21.482 "cntlid": 5, 00:20:21.482 "qid": 0, 00:20:21.482 "state": "enabled", 00:20:21.482 "thread": "nvmf_tgt_poll_group_000", 00:20:21.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:21.482 "listen_address": { 00:20:21.482 "trtype": "TCP", 00:20:21.482 "adrfam": "IPv4", 00:20:21.482 "traddr": "10.0.0.2", 00:20:21.482 "trsvcid": "4420" 00:20:21.482 }, 00:20:21.482 "peer_address": { 00:20:21.482 "trtype": "TCP", 00:20:21.482 "adrfam": "IPv4", 00:20:21.482 "traddr": "10.0.0.1", 00:20:21.482 "trsvcid": "57960" 00:20:21.482 }, 00:20:21.482 "auth": { 00:20:21.482 "state": "completed", 00:20:21.482 "digest": "sha256", 00:20:21.482 "dhgroup": "null" 00:20:21.482 } 00:20:21.482 } 00:20:21.482 ]' 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.482 06:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.482 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.743 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:21.743 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:22.313 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.574 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.834 00:20:22.834 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.834 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.834 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.095 { 00:20:23.095 "cntlid": 7, 00:20:23.095 "qid": 0, 00:20:23.095 "state": "enabled", 00:20:23.095 "thread": "nvmf_tgt_poll_group_000", 00:20:23.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:23.095 "listen_address": { 00:20:23.095 "trtype": "TCP", 00:20:23.095 "adrfam": "IPv4", 00:20:23.095 "traddr": "10.0.0.2", 00:20:23.095 "trsvcid": "4420" 00:20:23.095 }, 00:20:23.095 "peer_address": { 00:20:23.095 "trtype": "TCP", 00:20:23.095 "adrfam": "IPv4", 00:20:23.095 "traddr": "10.0.0.1", 00:20:23.095 "trsvcid": "57980" 00:20:23.095 }, 00:20:23.095 "auth": { 00:20:23.095 "state": "completed", 00:20:23.095 "digest": "sha256", 00:20:23.095 "dhgroup": "null" 00:20:23.095 } 00:20:23.095 } 00:20:23.095 ]' 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.095 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.355 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
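
Annotation: key3 is deliberately one-directional. ckeys[3] is empty, and the ${ckeys[$3]:+...} expansion visible in the sh@68 trace drops the controller-key flags entirely, so the target never has to prove itself for that slot — which is why the key3 add_host and attach commands above carry no --dhchap-ctrlr-key. The idiom, reconstructed from the trace ($3 is connect_authenticate's third positional argument, the key id):

  # Expands to nothing when ckeys[keyid] is unset/empty, so the controller-key
  # flag pair simply disappears from the add_host/attach commands for key3.
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
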
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.355 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.355 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.355 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:23.355 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.294 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
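
Annotation: the sh@119-123 markers expose the driver for everything that follows — an outer walk over DH groups, an inner walk over the four key slots, with the host's permitted algorithms pinned via bdev_nvme_set_options before each attempt. Only sha256 appears as the digest in this window, so an enclosing digest loop presumably sits elsewhere in the script. Reconstructed (not verbatim) from the trace:

  # Reconstructed from the auth.sh@119-123 trace lines; names are the script's own.
  for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do         # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
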
common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.295 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.295 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.295 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.295 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.295 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.554 00:20:24.554 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.554 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.554 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.554 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.554 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.554 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.554 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.813 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.813 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.813 { 00:20:24.813 "cntlid": 9, 00:20:24.813 "qid": 0, 00:20:24.813 "state": "enabled", 00:20:24.813 "thread": "nvmf_tgt_poll_group_000", 00:20:24.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:24.813 "listen_address": { 00:20:24.813 "trtype": "TCP", 00:20:24.813 "adrfam": "IPv4", 00:20:24.813 "traddr": "10.0.0.2", 00:20:24.813 "trsvcid": "4420" 00:20:24.813 }, 00:20:24.813 "peer_address": { 00:20:24.813 "trtype": "TCP", 00:20:24.813 "adrfam": "IPv4", 00:20:24.813 "traddr": "10.0.0.1", 00:20:24.813 "trsvcid": "58004" 00:20:24.813 }, 00:20:24.813 "auth": { 00:20:24.813 "state": "completed", 00:20:24.813 "digest": "sha256", 00:20:24.813 "dhgroup": "ffdhe2048" 00:20:24.813 } 00:20:24.813 } 00:20:24.813 ]' 00:20:24.813 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.813 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.814 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.814 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:24.814 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.814 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.814 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.814 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.074 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:25.074 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:25.644 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.904 06:30:45 
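
Annotation: every hostrpc line expands to the same thing, which is the one structural fact worth pulling out of the noise — two SPDK processes are in play, the nvmf target answering on its default RPC socket and a separate host-side app on /var/tmp/host.sock, and the wrapper just routes rpc.py to the latter:

  # What the target/auth.sh@31 trace shows hostrpc doing on every call
  # ($rootdir is the spdk checkout, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk here):
  hostrpc() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
  }
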
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.904 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.904 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.904 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.904 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.163 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.163 { 00:20:26.163 "cntlid": 11, 00:20:26.163 "qid": 0, 00:20:26.163 "state": "enabled", 00:20:26.163 "thread": "nvmf_tgt_poll_group_000", 00:20:26.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:26.163 "listen_address": { 00:20:26.163 "trtype": "TCP", 00:20:26.163 "adrfam": "IPv4", 00:20:26.163 "traddr": "10.0.0.2", 00:20:26.163 "trsvcid": "4420" 00:20:26.163 }, 00:20:26.163 "peer_address": { 00:20:26.163 "trtype": "TCP", 00:20:26.163 "adrfam": "IPv4", 00:20:26.163 "traddr": "10.0.0.1", 00:20:26.163 "trsvcid": "35908" 00:20:26.163 }, 00:20:26.163 "auth": { 00:20:26.163 "state": "completed", 00:20:26.163 "digest": "sha256", 00:20:26.163 "dhgroup": "ffdhe2048" 00:20:26.163 } 00:20:26.163 } 00:20:26.163 ]' 00:20:26.163 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.422 06:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.422 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.422 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.422 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.422 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.422 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.422 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.681 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:26.681 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:27.252 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.252 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.252 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.252 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.252 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.252 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.252 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.253 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.514 06:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.514 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.774 00:20:27.774 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.774 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.774 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.774 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.774 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.774 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.774 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.033 { 00:20:28.033 "cntlid": 13, 00:20:28.033 "qid": 0, 00:20:28.033 "state": "enabled", 00:20:28.033 "thread": "nvmf_tgt_poll_group_000", 00:20:28.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:28.033 "listen_address": { 00:20:28.033 "trtype": "TCP", 00:20:28.033 "adrfam": "IPv4", 00:20:28.033 "traddr": "10.0.0.2", 00:20:28.033 "trsvcid": "4420" 00:20:28.033 }, 00:20:28.033 "peer_address": { 00:20:28.033 "trtype": "TCP", 00:20:28.033 "adrfam": "IPv4", 00:20:28.033 "traddr": "10.0.0.1", 00:20:28.033 "trsvcid": "35926" 00:20:28.033 }, 00:20:28.033 "auth": { 00:20:28.033 "state": "completed", 00:20:28.033 "digest": 
"sha256", 00:20:28.033 "dhgroup": "ffdhe2048" 00:20:28.033 } 00:20:28.033 } 00:20:28.033 ]' 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.033 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.292 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:28.292 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:28.901 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.901 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.901 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.901 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.901 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.901 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.901 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:28.901 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.231 06:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.231 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.231 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.516 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.516 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.516 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.516 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.516 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.516 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.516 { 00:20:29.516 "cntlid": 15, 00:20:29.516 "qid": 0, 00:20:29.516 "state": "enabled", 00:20:29.516 "thread": "nvmf_tgt_poll_group_000", 00:20:29.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:29.516 "listen_address": { 00:20:29.516 "trtype": "TCP", 00:20:29.516 "adrfam": "IPv4", 00:20:29.516 "traddr": "10.0.0.2", 00:20:29.516 "trsvcid": "4420" 00:20:29.516 }, 00:20:29.516 "peer_address": { 00:20:29.516 "trtype": "TCP", 00:20:29.516 "adrfam": "IPv4", 00:20:29.516 "traddr": "10.0.0.1", 00:20:29.516 
"trsvcid": "35966" 00:20:29.516 }, 00:20:29.517 "auth": { 00:20:29.517 "state": "completed", 00:20:29.517 "digest": "sha256", 00:20:29.517 "dhgroup": "ffdhe2048" 00:20:29.517 } 00:20:29.517 } 00:20:29.517 ]' 00:20:29.517 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.517 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.517 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.517 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.517 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.778 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.778 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.778 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.778 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:29.778 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:30.350 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:30.612 06:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.612 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.873 00:20:30.873 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.873 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.873 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.133 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.133 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.133 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.133 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.133 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.133 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.133 { 00:20:31.133 "cntlid": 17, 00:20:31.133 "qid": 0, 00:20:31.133 "state": "enabled", 00:20:31.133 "thread": "nvmf_tgt_poll_group_000", 00:20:31.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:31.133 "listen_address": { 00:20:31.133 "trtype": "TCP", 00:20:31.133 "adrfam": "IPv4", 
00:20:31.133 "traddr": "10.0.0.2", 00:20:31.133 "trsvcid": "4420" 00:20:31.134 }, 00:20:31.134 "peer_address": { 00:20:31.134 "trtype": "TCP", 00:20:31.134 "adrfam": "IPv4", 00:20:31.134 "traddr": "10.0.0.1", 00:20:31.134 "trsvcid": "36002" 00:20:31.134 }, 00:20:31.134 "auth": { 00:20:31.134 "state": "completed", 00:20:31.134 "digest": "sha256", 00:20:31.134 "dhgroup": "ffdhe3072" 00:20:31.134 } 00:20:31.134 } 00:20:31.134 ]' 00:20:31.134 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.134 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.134 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.134 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.134 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.394 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.394 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.394 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.394 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:31.394 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.336 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.596 00:20:32.596 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.596 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.596 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.856 { 
00:20:32.856 "cntlid": 19, 00:20:32.856 "qid": 0, 00:20:32.856 "state": "enabled", 00:20:32.856 "thread": "nvmf_tgt_poll_group_000", 00:20:32.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.856 "listen_address": { 00:20:32.856 "trtype": "TCP", 00:20:32.856 "adrfam": "IPv4", 00:20:32.856 "traddr": "10.0.0.2", 00:20:32.856 "trsvcid": "4420" 00:20:32.856 }, 00:20:32.856 "peer_address": { 00:20:32.856 "trtype": "TCP", 00:20:32.856 "adrfam": "IPv4", 00:20:32.856 "traddr": "10.0.0.1", 00:20:32.856 "trsvcid": "36014" 00:20:32.856 }, 00:20:32.856 "auth": { 00:20:32.856 "state": "completed", 00:20:32.856 "digest": "sha256", 00:20:32.856 "dhgroup": "ffdhe3072" 00:20:32.856 } 00:20:32.856 } 00:20:32.856 ]' 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.856 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.856 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.856 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.856 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.856 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.856 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.116 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:33.117 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.686 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.946 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.947 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.947 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.207 00:20:34.207 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.207 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.207 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.468 06:30:54 
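
Annotation: stripped of timestamps and xtrace markers, every connect_authenticate round in this log performs the same steps. A condensed paraphrase — not the verbatim function, though the variable names follow the sh@65-83 trace — that would reproduce one round:

  # Condensed paraphrase of one connect_authenticate round (see the sh@65-83 markers).
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
      bdev_connect -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"     # SPDK host stack
      [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      local qpairs
      qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
      [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
      [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
      hostrpc bdev_nvme_detach_controller nvme0
  }
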
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.468 { 00:20:34.468 "cntlid": 21, 00:20:34.468 "qid": 0, 00:20:34.468 "state": "enabled", 00:20:34.468 "thread": "nvmf_tgt_poll_group_000", 00:20:34.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:34.468 "listen_address": { 00:20:34.468 "trtype": "TCP", 00:20:34.468 "adrfam": "IPv4", 00:20:34.468 "traddr": "10.0.0.2", 00:20:34.468 "trsvcid": "4420" 00:20:34.468 }, 00:20:34.468 "peer_address": { 00:20:34.468 "trtype": "TCP", 00:20:34.468 "adrfam": "IPv4", 00:20:34.468 "traddr": "10.0.0.1", 00:20:34.468 "trsvcid": "36030" 00:20:34.468 }, 00:20:34.468 "auth": { 00:20:34.468 "state": "completed", 00:20:34.468 "digest": "sha256", 00:20:34.468 "dhgroup": "ffdhe3072" 00:20:34.468 } 00:20:34.468 } 00:20:34.468 ]' 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.468 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.728 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:34.729 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.299 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.559 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.819 00:20:35.819 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.819 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.819 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.080 06:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.080 { 00:20:36.080 "cntlid": 23, 00:20:36.080 "qid": 0, 00:20:36.080 "state": "enabled", 00:20:36.080 "thread": "nvmf_tgt_poll_group_000", 00:20:36.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.080 "listen_address": { 00:20:36.080 "trtype": "TCP", 00:20:36.080 "adrfam": "IPv4", 00:20:36.080 "traddr": "10.0.0.2", 00:20:36.080 "trsvcid": "4420" 00:20:36.080 }, 00:20:36.080 "peer_address": { 00:20:36.080 "trtype": "TCP", 00:20:36.080 "adrfam": "IPv4", 00:20:36.080 "traddr": "10.0.0.1", 00:20:36.080 "trsvcid": "53886" 00:20:36.080 }, 00:20:36.080 "auth": { 00:20:36.080 "state": "completed", 00:20:36.080 "digest": "sha256", 00:20:36.080 "dhgroup": "ffdhe3072" 00:20:36.080 } 00:20:36.080 } 00:20:36.080 ]' 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.080 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.340 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:36.340 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.911 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.172 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.432 00:20:37.432 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.432 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.432 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.693 { 00:20:37.693 "cntlid": 25, 00:20:37.693 "qid": 0, 00:20:37.693 "state": "enabled", 00:20:37.693 "thread": "nvmf_tgt_poll_group_000", 00:20:37.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.693 "listen_address": { 00:20:37.693 "trtype": "TCP", 00:20:37.693 "adrfam": "IPv4", 00:20:37.693 "traddr": "10.0.0.2", 00:20:37.693 "trsvcid": "4420" 00:20:37.693 }, 00:20:37.693 "peer_address": { 00:20:37.693 "trtype": "TCP", 00:20:37.693 "adrfam": "IPv4", 00:20:37.693 "traddr": "10.0.0.1", 00:20:37.693 "trsvcid": "53902" 00:20:37.693 }, 00:20:37.693 "auth": { 00:20:37.693 "state": "completed", 00:20:37.693 "digest": "sha256", 00:20:37.693 "dhgroup": "ffdhe4096" 00:20:37.693 } 00:20:37.693 } 00:20:37.693 ]' 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.693 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.953 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.953 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.953 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.953 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:37.954 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.894 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.894 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.895 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.895 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.895 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.153 00:20:39.153 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.153 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.153 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.414 { 00:20:39.414 "cntlid": 27, 00:20:39.414 "qid": 0, 00:20:39.414 "state": "enabled", 00:20:39.414 "thread": "nvmf_tgt_poll_group_000", 00:20:39.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:39.414 "listen_address": { 00:20:39.414 "trtype": "TCP", 00:20:39.414 "adrfam": "IPv4", 00:20:39.414 "traddr": "10.0.0.2", 00:20:39.414 "trsvcid": "4420" 00:20:39.414 }, 00:20:39.414 "peer_address": { 00:20:39.414 "trtype": "TCP", 00:20:39.414 "adrfam": "IPv4", 00:20:39.414 "traddr": "10.0.0.1", 00:20:39.414 "trsvcid": "53920" 00:20:39.414 }, 00:20:39.414 "auth": { 00:20:39.414 "state": "completed", 00:20:39.414 "digest": "sha256", 00:20:39.414 "dhgroup": "ffdhe4096" 00:20:39.414 } 00:20:39.414 } 00:20:39.414 ]' 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.414 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.673 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:39.673 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:40.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:40.244 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.504 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.765 00:20:40.765 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:20:40.765 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.765 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.026 { 00:20:41.026 "cntlid": 29, 00:20:41.026 "qid": 0, 00:20:41.026 "state": "enabled", 00:20:41.026 "thread": "nvmf_tgt_poll_group_000", 00:20:41.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:41.026 "listen_address": { 00:20:41.026 "trtype": "TCP", 00:20:41.026 "adrfam": "IPv4", 00:20:41.026 "traddr": "10.0.0.2", 00:20:41.026 "trsvcid": "4420" 00:20:41.026 }, 00:20:41.026 "peer_address": { 00:20:41.026 "trtype": "TCP", 00:20:41.026 "adrfam": "IPv4", 00:20:41.026 "traddr": "10.0.0.1", 00:20:41.026 "trsvcid": "53944" 00:20:41.026 }, 00:20:41.026 "auth": { 00:20:41.026 "state": "completed", 00:20:41.026 "digest": "sha256", 00:20:41.026 "dhgroup": "ffdhe4096" 00:20:41.026 } 00:20:41.026 } 00:20:41.026 ]' 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.026 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.286 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:41.286 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: 
--dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:41.856 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.116 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.375 00:20:42.375 06:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.375 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.375 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.634 { 00:20:42.634 "cntlid": 31, 00:20:42.634 "qid": 0, 00:20:42.634 "state": "enabled", 00:20:42.634 "thread": "nvmf_tgt_poll_group_000", 00:20:42.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:42.634 "listen_address": { 00:20:42.634 "trtype": "TCP", 00:20:42.634 "adrfam": "IPv4", 00:20:42.634 "traddr": "10.0.0.2", 00:20:42.634 "trsvcid": "4420" 00:20:42.634 }, 00:20:42.634 "peer_address": { 00:20:42.634 "trtype": "TCP", 00:20:42.634 "adrfam": "IPv4", 00:20:42.634 "traddr": "10.0.0.1", 00:20:42.634 "trsvcid": "53964" 00:20:42.634 }, 00:20:42.634 "auth": { 00:20:42.634 "state": "completed", 00:20:42.634 "digest": "sha256", 00:20:42.634 "dhgroup": "ffdhe4096" 00:20:42.634 } 00:20:42.634 } 00:20:42.634 ]' 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.634 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.635 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.635 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.635 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.895 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:42.896 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.466 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.726 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.986 00:20:43.986 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.986 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.986 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.244 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.244 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.245 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.245 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.245 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.245 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.245 { 00:20:44.245 "cntlid": 33, 00:20:44.245 "qid": 0, 00:20:44.245 "state": "enabled", 00:20:44.245 "thread": "nvmf_tgt_poll_group_000", 00:20:44.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:44.245 "listen_address": { 00:20:44.245 "trtype": "TCP", 00:20:44.245 "adrfam": "IPv4", 00:20:44.245 "traddr": "10.0.0.2", 00:20:44.245 "trsvcid": "4420" 00:20:44.245 }, 00:20:44.245 "peer_address": { 00:20:44.245 "trtype": "TCP", 00:20:44.245 "adrfam": "IPv4", 00:20:44.245 "traddr": "10.0.0.1", 00:20:44.245 "trsvcid": "53988" 00:20:44.245 }, 00:20:44.245 "auth": { 00:20:44.245 "state": "completed", 00:20:44.245 "digest": "sha256", 00:20:44.245 "dhgroup": "ffdhe6144" 00:20:44.245 } 00:20:44.245 } 00:20:44.245 ]' 00:20:44.245 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.245 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.245 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.504 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.504 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.504 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.504 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.504 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.504 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:44.504 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.445 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.706 00:20:45.706 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.706 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.706 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.966 { 00:20:45.966 "cntlid": 35, 00:20:45.966 "qid": 0, 00:20:45.966 "state": "enabled", 00:20:45.966 "thread": "nvmf_tgt_poll_group_000", 00:20:45.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:45.966 "listen_address": { 00:20:45.966 "trtype": "TCP", 00:20:45.966 "adrfam": "IPv4", 00:20:45.966 "traddr": "10.0.0.2", 00:20:45.966 "trsvcid": "4420" 00:20:45.966 }, 00:20:45.966 "peer_address": { 00:20:45.966 "trtype": "TCP", 00:20:45.966 "adrfam": "IPv4", 00:20:45.966 "traddr": "10.0.0.1", 00:20:45.966 "trsvcid": "40882" 00:20:45.966 }, 00:20:45.966 "auth": { 00:20:45.966 "state": "completed", 00:20:45.966 "digest": "sha256", 00:20:45.966 "dhgroup": "ffdhe6144" 00:20:45.966 } 00:20:45.966 } 00:20:45.966 ]' 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.966 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.226 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.226 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.226 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.226 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.226 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.226 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:46.226 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.181 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.442 00:20:47.442 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.442 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.442 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.702 { 00:20:47.702 "cntlid": 37, 00:20:47.702 "qid": 0, 00:20:47.702 "state": "enabled", 00:20:47.702 "thread": "nvmf_tgt_poll_group_000", 00:20:47.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:47.702 "listen_address": { 00:20:47.702 "trtype": "TCP", 00:20:47.702 "adrfam": "IPv4", 00:20:47.702 "traddr": "10.0.0.2", 00:20:47.702 "trsvcid": "4420" 00:20:47.702 }, 00:20:47.702 "peer_address": { 00:20:47.702 "trtype": "TCP", 00:20:47.702 "adrfam": "IPv4", 00:20:47.702 "traddr": "10.0.0.1", 00:20:47.702 "trsvcid": "40906" 00:20:47.702 }, 00:20:47.702 "auth": { 00:20:47.702 "state": "completed", 00:20:47.702 "digest": "sha256", 00:20:47.702 "dhgroup": "ffdhe6144" 00:20:47.702 } 00:20:47.702 } 00:20:47.702 ]' 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.702 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.963 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.963 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0
00:20:47.963 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:47.963 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:20:47.963 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:48.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:48.913 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:48.913 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:49.173
00:20:49.173 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:49.173 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:49.173 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:49.434 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:49.434 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:49.434 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:49.435 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.435 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:49.435 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:49.435 {
00:20:49.435 "cntlid": 39,
00:20:49.435 "qid": 0,
00:20:49.435 "state": "enabled",
00:20:49.435 "thread": "nvmf_tgt_poll_group_000",
00:20:49.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:49.435 "listen_address": {
00:20:49.435 "trtype": "TCP",
00:20:49.435 "adrfam": "IPv4",
00:20:49.435 "traddr": "10.0.0.2",
00:20:49.435 "trsvcid": "4420"
00:20:49.435 },
00:20:49.435 "peer_address": {
00:20:49.435 "trtype": "TCP",
00:20:49.435 "adrfam": "IPv4",
00:20:49.435 "traddr": "10.0.0.1",
00:20:49.435 "trsvcid": "40938"
00:20:49.435 },
00:20:49.435 "auth": {
00:20:49.435 "state": "completed",
00:20:49.435 "digest": "sha256",
00:20:49.435 "dhgroup": "ffdhe6144"
00:20:49.435 }
00:20:49.435 }
00:20:49.435 ]'
00:20:49.435 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:49.435 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:49.435 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:49.695 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:49.695 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:49.695 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:49.695 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:49.695 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:49.695 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:20:49.695 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:50.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
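The trace above is one full connect_authenticate iteration: pin the host initiator to a single digest/dhgroup pair, authorize the host NQN on the subsystem, attach a controller (the step that actually forces the DH-HMAC-CHAP handshake), assert the negotiated parameters, then tear down. A minimal stand-alone sketch of that flow, assuming a running SPDK target at 10.0.0.2:4420, a host-side bdev service on /var/tmp/host.sock, and keys named key0/ckey0 already loaded as earlier in this run:

  #!/usr/bin/env bash
  set -euo pipefail
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Pin the host initiator to one digest/dhgroup combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Authorize the host on the subsystem with bidirectional keys.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attaching the controller is what runs the DH-HMAC-CHAP handshake.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Inspect what was negotiated on the resulting qpair.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
  # Tear down so the next digest/dhgroup/key combination starts clean.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"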
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:50.633 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.203
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:51.203 {
00:20:51.203 "cntlid": 41,
00:20:51.203 "qid": 0,
00:20:51.203 "state": "enabled",
00:20:51.203 "thread": "nvmf_tgt_poll_group_000",
00:20:51.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:51.203 "listen_address": {
00:20:51.203 "trtype": "TCP",
00:20:51.203 "adrfam": "IPv4",
00:20:51.203 "traddr": "10.0.0.2",
00:20:51.203 "trsvcid": "4420"
00:20:51.203 },
00:20:51.203 "peer_address": {
00:20:51.203 "trtype": "TCP",
00:20:51.203 "adrfam": "IPv4",
00:20:51.203 "traddr": "10.0.0.1",
00:20:51.203 "trsvcid": "40960"
00:20:51.203 },
00:20:51.203 "auth": {
00:20:51.203 "state": "completed",
00:20:51.203 "digest": "sha256",
00:20:51.203 "dhgroup": "ffdhe8192"
00:20:51.203 }
00:20:51.203 }
00:20:51.203 ]'
00:20:51.203 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:51.464 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:51.464 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:51.465 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:51.465 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:51.465 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:51.465 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:51.465 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:51.725 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:20:51.725 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:52.293 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:52.552 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:52.811
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.071 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:53.071 {
00:20:53.071 "cntlid": 43,
00:20:53.071 "qid": 0,
00:20:53.071 "state": "enabled",
00:20:53.071 "thread": "nvmf_tgt_poll_group_000",
00:20:53.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:53.071 "listen_address": {
00:20:53.071 "trtype": "TCP",
00:20:53.071 "adrfam": "IPv4",
00:20:53.071 "traddr": "10.0.0.2",
00:20:53.071 "trsvcid": "4420"
00:20:53.071 },
00:20:53.072 "peer_address": {
00:20:53.072 "trtype": "TCP",
00:20:53.072 "adrfam": "IPv4",
00:20:53.072 "traddr": "10.0.0.1",
00:20:53.072 "trsvcid": "40980"
00:20:53.072 },
00:20:53.072 "auth": {
00:20:53.072 "state": "completed",
00:20:53.072 "digest": "sha256",
00:20:53.072 "dhgroup": "ffdhe8192"
00:20:53.072 }
00:20:53.072 }
00:20:53.072 ]'
00:20:53.072 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:53.334 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:53.334 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:53.334 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:53.334 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:53.334 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:53.334 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.334 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:53.595 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
00:20:53.595 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:54.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:54.164 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:54.424 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:20:54.424 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:54.424 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:54.424 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:54.424 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:54.424 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:54.424 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.425 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.425 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.425 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.425 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.425 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.425 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.685
00:20:54.946 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:54.946 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:54.946 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:54.946 {
00:20:54.946 "cntlid": 45,
00:20:54.946 "qid": 0,
00:20:54.946 "state": "enabled",
00:20:54.946 "thread": "nvmf_tgt_poll_group_000",
00:20:54.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:54.946 "listen_address": {
00:20:54.946 "trtype": "TCP",
00:20:54.946 "adrfam": "IPv4",
00:20:54.946 "traddr": "10.0.0.2",
00:20:54.946 "trsvcid": "4420"
00:20:54.946 },
00:20:54.946 "peer_address": {
00:20:54.946 "trtype": "TCP",
00:20:54.946 "adrfam": "IPv4",
00:20:54.946 "traddr": "10.0.0.1",
00:20:54.946 "trsvcid": "49728"
00:20:54.946 },
00:20:54.946 "auth": {
00:20:54.946 "state": "completed",
00:20:54.946 "digest": "sha256",
00:20:54.946 "dhgroup": "ffdhe8192"
00:20:54.946 }
00:20:54.946 }
00:20:54.946 ]'
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:54.946 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:55.208 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:55.208 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:55.208 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:55.208 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:55.208 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:55.208 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:20:55.208 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:56.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
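Each iteration above also exercises the in-kernel host through the nvme_connect helper. A hedged sketch of the equivalent direct nvme-cli invocation (the DHHC-1 strings below are placeholders standing in for the generated host/controller secrets of this run, not real key material):

  # Connect with DH-HMAC-CHAP from the kernel NVMe/TCP host.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret 'DHHC-1:02:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>:'
  # -l 0 sets a zero controller-loss timeout, so a failed authentication
  # surfaces immediately instead of being retried in the background.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0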
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:56.149 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:56.718
00:20:56.718 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:56.718 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:56.718 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:56.718 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:56.718 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:56.718 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:56.718 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.980 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:56.980 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:56.980 {
00:20:56.980 "cntlid": 47,
00:20:56.980 "qid": 0,
00:20:56.980 "state": "enabled",
00:20:56.980 "thread": "nvmf_tgt_poll_group_000",
00:20:56.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:56.980 "listen_address": {
00:20:56.980 "trtype": "TCP",
00:20:56.980 "adrfam": "IPv4",
00:20:56.980 "traddr": "10.0.0.2",
00:20:56.980 "trsvcid": "4420"
00:20:56.980 },
00:20:56.980 "peer_address": {
00:20:56.980 "trtype": "TCP",
00:20:56.980 "adrfam": "IPv4",
00:20:56.980 "traddr": "10.0.0.1",
00:20:56.980 "trsvcid": "49758"
00:20:56.980 },
00:20:56.980 "auth": {
00:20:56.980 "state": "completed",
00:20:56.980 "digest": "sha256",
00:20:56.980 "dhgroup": "ffdhe8192"
00:20:56.980 }
00:20:56.980 }
00:20:56.980 ]'
00:20:56.980 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:56.980 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:56.980 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:56.980 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:56.980 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:56.980 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:56.980 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:56.980 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:57.240 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:20:57.240 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:57.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:57.812 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.073 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.333
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.333 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:58.333 {
00:20:58.333 "cntlid": 49,
00:20:58.333 "qid": 0,
00:20:58.333 "state": "enabled",
00:20:58.333 "thread": "nvmf_tgt_poll_group_000",
00:20:58.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:58.333 "listen_address": {
00:20:58.333 "trtype": "TCP",
00:20:58.333 "adrfam": "IPv4",
00:20:58.333 "traddr": "10.0.0.2",
00:20:58.333 "trsvcid": "4420"
00:20:58.333 },
00:20:58.333 "peer_address": {
00:20:58.333 "trtype": "TCP",
00:20:58.333 "adrfam": "IPv4",
00:20:58.333 "traddr": "10.0.0.1",
00:20:58.333 "trsvcid": "49796"
00:20:58.333 },
00:20:58.334 "auth": {
00:20:58.334 "state": "completed",
00:20:58.334 "digest": "sha384",
00:20:58.334 "dhgroup": "null"
00:20:58.334 }
00:20:58.334 }
00:20:58.334 ]'
00:20:58.334 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:58.593 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:58.593 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:58.593 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:58.593 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:58.594 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:58.594 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:58.594 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.854 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:20:58.854 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:59.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:59.428 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:59.689 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.690 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.950
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:59.950 {
00:20:59.950 "cntlid": 51,
00:20:59.950 "qid": 0,
00:20:59.950 "state": "enabled",
00:20:59.950 "thread": "nvmf_tgt_poll_group_000",
00:20:59.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:59.950 "listen_address": {
00:20:59.950 "trtype": "TCP",
00:20:59.950 "adrfam": "IPv4",
00:20:59.950 "traddr": "10.0.0.2",
00:20:59.950 "trsvcid": "4420"
00:20:59.950 },
00:20:59.950 "peer_address": {
00:20:59.950 "trtype": "TCP",
00:20:59.950 "adrfam": "IPv4",
00:20:59.950 "traddr": "10.0.0.1",
00:20:59.950 "trsvcid": "49810"
00:20:59.950 },
00:20:59.950 "auth": {
00:20:59.950 "state": "completed",
00:20:59.950 "digest": "sha384",
00:20:59.950 "dhgroup": "null"
00:20:59.950 }
00:20:59.950 }
00:20:59.950 ]'
00:20:59.950 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:00.211 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:00.211 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:00.211 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:00.211 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:00.211 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:00.211 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:00.211 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:00.471 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
00:21:00.471 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:01.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:01.043 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.304 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.564
00:21:01.564 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:01.564 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:01.564 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:01.564 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:01.564 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:01.564 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:01.564 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.823 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:01.823 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:01.823 {
00:21:01.823 "cntlid": 53,
00:21:01.823 "qid": 0,
00:21:01.823 "state": "enabled",
00:21:01.823 "thread": "nvmf_tgt_poll_group_000",
00:21:01.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:01.823 "listen_address": {
00:21:01.823 "trtype": "TCP",
00:21:01.823 "adrfam": "IPv4",
00:21:01.823 "traddr": "10.0.0.2",
00:21:01.823 "trsvcid": "4420"
00:21:01.823 },
00:21:01.823 "peer_address": {
00:21:01.823 "trtype": "TCP",
00:21:01.823 "adrfam": "IPv4",
00:21:01.823 "traddr": "10.0.0.1",
00:21:01.823 "trsvcid": "49826"
00:21:01.823 },
00:21:01.824 "auth": {
00:21:01.824 "state": "completed",
00:21:01.824 "digest": "sha384",
00:21:01.824 "dhgroup": "null"
00:21:01.824 }
00:21:01.824 }
00:21:01.824 ]'
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:01.824 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:02.083 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:21:02.083 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:02.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:02.652 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:02.913 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:03.174
00:21:03.174 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:03.174 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:03.174 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:03.435 {
00:21:03.435 "cntlid": 55,
00:21:03.435 "qid": 0,
00:21:03.435 "state": "enabled",
00:21:03.435 "thread": "nvmf_tgt_poll_group_000",
00:21:03.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:03.435 "listen_address": {
00:21:03.435 "trtype": "TCP",
00:21:03.435 "adrfam": "IPv4",
00:21:03.435 "traddr": "10.0.0.2",
00:21:03.435 "trsvcid": "4420"
00:21:03.435 },
00:21:03.435 "peer_address": {
00:21:03.435 "trtype": "TCP",
00:21:03.435 "adrfam": "IPv4",
00:21:03.435 "traddr": "10.0.0.1",
00:21:03.435 "trsvcid": "49860"
00:21:03.435 },
00:21:03.435 "auth": {
00:21:03.435 "state": "completed",
00:21:03.435 "digest": "sha384",
00:21:03.435 "dhgroup": "null"
00:21:03.435 }
00:21:03.435 }
00:21:03.435 ]'
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:03.435 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:03.696 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:21:03.696 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:04.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:04.265 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.525 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.785
00:21:04.785 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:04.785 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:04.785 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:04.785 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:04.785 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:04.785 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.785 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.785 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.785 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:04.785 {
00:21:04.785 "cntlid": 57,
00:21:04.785 "qid": 0,
00:21:04.785 "state": "enabled",
00:21:04.785 "thread": "nvmf_tgt_poll_group_000",
00:21:04.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:04.785 "listen_address": {
00:21:04.785 "trtype": "TCP",
00:21:04.785 "adrfam": "IPv4",
00:21:04.785 "traddr": "10.0.0.2",
00:21:04.785 "trsvcid": "4420"
00:21:04.785 },
00:21:04.785 "peer_address": {
00:21:04.785 "trtype": "TCP",
00:21:04.785 "adrfam": "IPv4",
00:21:04.785 "traddr": "10.0.0.1",
00:21:04.785 "trsvcid": "40958"
00:21:04.785 },
00:21:04.785 "auth": {
00:21:04.785 "state": "completed",
00:21:04.785 "digest": "sha384",
00:21:04.785 "dhgroup": "ffdhe2048"
00:21:04.785 }
00:21:04.785 }
00:21:04.785 ]'
00:21:04.785 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:05.045 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:05.045 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:05.045 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:05.045 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:05.045 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:05.045 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:05.045 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:05.305 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:21:05.305 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:05.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:05.875 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.134 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.393 00:21:06.393 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.393 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.393 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.393 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.393 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.393 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.393 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.394 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.394 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.394 { 00:21:06.394 "cntlid": 59, 00:21:06.394 "qid": 0, 00:21:06.394 "state": "enabled", 00:21:06.394 "thread": "nvmf_tgt_poll_group_000", 00:21:06.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:06.394 "listen_address": { 00:21:06.394 "trtype": "TCP", 00:21:06.394 "adrfam": "IPv4", 00:21:06.394 "traddr": "10.0.0.2", 00:21:06.394 "trsvcid": "4420" 00:21:06.394 }, 00:21:06.394 "peer_address": { 00:21:06.394 "trtype": "TCP", 00:21:06.394 "adrfam": "IPv4", 00:21:06.394 "traddr": "10.0.0.1", 00:21:06.394 "trsvcid": "40984" 00:21:06.394 }, 00:21:06.394 "auth": { 00:21:06.394 "state": "completed", 00:21:06.394 "digest": "sha384", 00:21:06.394 "dhgroup": "ffdhe2048" 00:21:06.394 } 00:21:06.394 } 00:21:06.394 ]' 00:21:06.394 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.653 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.653 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.653 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.653 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.653 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.653 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.653 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.914 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:06.914 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.533 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.834 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:07.834 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.834 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.834 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.834 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.834 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.835 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.835 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.835 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.835 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.835 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.835 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.835 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.835 00:21:07.835 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.835 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.835 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.125 { 00:21:08.125 "cntlid": 61, 00:21:08.125 "qid": 0, 00:21:08.125 "state": "enabled", 00:21:08.125 "thread": "nvmf_tgt_poll_group_000", 00:21:08.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:08.125 "listen_address": { 00:21:08.125 "trtype": "TCP", 00:21:08.125 "adrfam": "IPv4", 00:21:08.125 "traddr": "10.0.0.2", 00:21:08.125 "trsvcid": "4420" 00:21:08.125 }, 00:21:08.125 "peer_address": { 00:21:08.125 "trtype": "TCP", 00:21:08.125 "adrfam": "IPv4", 00:21:08.125 "traddr": "10.0.0.1", 00:21:08.125 "trsvcid": "41010" 00:21:08.125 }, 00:21:08.125 "auth": { 00:21:08.125 "state": "completed", 00:21:08.125 "digest": "sha384", 00:21:08.125 "dhgroup": "ffdhe2048" 00:21:08.125 } 00:21:08.125 } 00:21:08.125 ]' 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.125 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.385 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:08.386 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:08.955 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.955 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.955 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.955 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.215 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.475 00:21:09.475 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.475 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.475 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.736 { 00:21:09.736 "cntlid": 63, 00:21:09.736 "qid": 0, 00:21:09.736 "state": "enabled", 00:21:09.736 "thread": "nvmf_tgt_poll_group_000", 00:21:09.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:09.736 "listen_address": { 00:21:09.736 "trtype": "TCP", 00:21:09.736 "adrfam": "IPv4", 00:21:09.736 "traddr": "10.0.0.2", 00:21:09.736 "trsvcid": "4420" 00:21:09.736 }, 00:21:09.736 "peer_address": { 00:21:09.736 "trtype": "TCP", 00:21:09.736 "adrfam": "IPv4", 00:21:09.736 "traddr": "10.0.0.1", 00:21:09.736 "trsvcid": "41042" 00:21:09.736 }, 00:21:09.736 "auth": { 00:21:09.736 "state": "completed", 00:21:09.736 "digest": "sha384", 00:21:09.736 "dhgroup": "ffdhe2048" 00:21:09.736 } 00:21:09.736 } 00:21:09.736 ]' 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.736 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.736 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.736 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.994 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:09.994 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:10.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.563 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.823 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.082 
00:21:11.082 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.082 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.082 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.343 { 00:21:11.343 "cntlid": 65, 00:21:11.343 "qid": 0, 00:21:11.343 "state": "enabled", 00:21:11.343 "thread": "nvmf_tgt_poll_group_000", 00:21:11.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:11.343 "listen_address": { 00:21:11.343 "trtype": "TCP", 00:21:11.343 "adrfam": "IPv4", 00:21:11.343 "traddr": "10.0.0.2", 00:21:11.343 "trsvcid": "4420" 00:21:11.343 }, 00:21:11.343 "peer_address": { 00:21:11.343 "trtype": "TCP", 00:21:11.343 "adrfam": "IPv4", 00:21:11.343 "traddr": "10.0.0.1", 00:21:11.343 "trsvcid": "41084" 00:21:11.343 }, 00:21:11.343 "auth": { 00:21:11.343 "state": "completed", 00:21:11.343 "digest": "sha384", 00:21:11.343 "dhgroup": "ffdhe3072" 00:21:11.343 } 00:21:11.343 } 00:21:11.343 ]' 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.343 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.604 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:11.604 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:12.174 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.174 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:12.174 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.174 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.433 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.434 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.434 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.434 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.434 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.693 00:21:12.693 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.693 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.693 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.953 { 00:21:12.953 "cntlid": 67, 00:21:12.953 "qid": 0, 00:21:12.953 "state": "enabled", 00:21:12.953 "thread": "nvmf_tgt_poll_group_000", 00:21:12.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:12.953 "listen_address": { 00:21:12.953 "trtype": "TCP", 00:21:12.953 "adrfam": "IPv4", 00:21:12.953 "traddr": "10.0.0.2", 00:21:12.953 "trsvcid": "4420" 00:21:12.953 }, 00:21:12.953 "peer_address": { 00:21:12.953 "trtype": "TCP", 00:21:12.953 "adrfam": "IPv4", 00:21:12.953 "traddr": "10.0.0.1", 00:21:12.953 "trsvcid": "41108" 00:21:12.953 }, 00:21:12.953 "auth": { 00:21:12.953 "state": "completed", 00:21:12.953 "digest": "sha384", 00:21:12.953 "dhgroup": "ffdhe3072" 00:21:12.953 } 00:21:12.953 } 00:21:12.953 ]' 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.953 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.214 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret 
DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:13.214 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:13.786 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.046 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.306 00:21:14.306 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.306 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.306 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.566 { 00:21:14.566 "cntlid": 69, 00:21:14.566 "qid": 0, 00:21:14.566 "state": "enabled", 00:21:14.566 "thread": "nvmf_tgt_poll_group_000", 00:21:14.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:14.566 "listen_address": { 00:21:14.566 "trtype": "TCP", 00:21:14.566 "adrfam": "IPv4", 00:21:14.566 "traddr": "10.0.0.2", 00:21:14.566 "trsvcid": "4420" 00:21:14.566 }, 00:21:14.566 "peer_address": { 00:21:14.566 "trtype": "TCP", 00:21:14.566 "adrfam": "IPv4", 00:21:14.566 "traddr": "10.0.0.1", 00:21:14.566 "trsvcid": "41126" 00:21:14.566 }, 00:21:14.566 "auth": { 00:21:14.566 "state": "completed", 00:21:14.566 "digest": "sha384", 00:21:14.566 "dhgroup": "ffdhe3072" 00:21:14.566 } 00:21:14.566 } 00:21:14.566 ]' 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.566 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:14.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:14.827 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.399 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.661 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.922 00:21:15.922 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.922 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.922 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.183 { 00:21:16.183 "cntlid": 71, 00:21:16.183 "qid": 0, 00:21:16.183 "state": "enabled", 00:21:16.183 "thread": "nvmf_tgt_poll_group_000", 00:21:16.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:16.183 "listen_address": { 00:21:16.183 "trtype": "TCP", 00:21:16.183 "adrfam": "IPv4", 00:21:16.183 "traddr": "10.0.0.2", 00:21:16.183 "trsvcid": "4420" 00:21:16.183 }, 00:21:16.183 "peer_address": { 00:21:16.183 "trtype": "TCP", 00:21:16.183 "adrfam": "IPv4", 00:21:16.183 "traddr": "10.0.0.1", 00:21:16.183 "trsvcid": "58438" 00:21:16.183 }, 00:21:16.183 "auth": { 00:21:16.183 "state": "completed", 00:21:16.183 "digest": "sha384", 00:21:16.183 "dhgroup": "ffdhe3072" 00:21:16.183 } 00:21:16.183 } 00:21:16.183 ]' 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.183 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.444 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.444 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.444 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.444 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:16.444 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:17.014 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.014 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.014 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.014 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.014 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.014 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.014 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.275 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.536
00:21:17.536 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:17.796 06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:17.796 {
00:21:17.796 "cntlid": 73,
00:21:17.796 "qid": 0,
00:21:17.796 "state": "enabled",
00:21:17.796 "thread": "nvmf_tgt_poll_group_000",
00:21:17.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:17.796 "listen_address": {
00:21:17.796 "trtype": "TCP",
00:21:17.796 "adrfam": "IPv4",
00:21:17.796 "traddr": "10.0.0.2",
00:21:17.796 "trsvcid": "4420"
00:21:17.796 },
00:21:17.796 "peer_address": {
00:21:17.796 "trtype": "TCP",
00:21:17.796 "adrfam": "IPv4",
00:21:17.796 "traddr": "10.0.0.1",
00:21:17.796 "trsvcid": "58458"
00:21:17.796 },
00:21:17.796 "auth": {
00:21:17.796 "state": "completed",
00:21:17.796 "digest": "sha384",
00:21:17.796 "dhgroup": "ffdhe4096"
00:21:17.796 }
00:21:17.796 }
00:21:17.796 ]'
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:18.057 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:18.057 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:21:18.625 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:18.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:18.884 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
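The nvme_connect/nvme disconnect pair above is the kernel-host leg of the pass: the same subsystem is exercised through the Linux NVMe/TCP initiator, with the DHHC-1 secrets passed directly on the command line rather than through SPDK's keyring. Stripped of the test wrapper (secrets elided here; the full values appear in the trace):

    # Kernel-host connect with DH-HMAC-CHAP, as nvme_connect drives it.
    uuid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${uuid}" --hostid "$uuid" -l 0 \
        --dhchap-secret 'DHHC-1:00:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0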
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.143
00:21:19.143 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:19.404 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:19.404 {
00:21:19.404 "cntlid": 75,
00:21:19.404 "qid": 0,
00:21:19.404 "state": "enabled",
00:21:19.404 "thread": "nvmf_tgt_poll_group_000",
00:21:19.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:19.404 "listen_address": {
00:21:19.404 "trtype": "TCP",
00:21:19.404 "adrfam": "IPv4",
00:21:19.404 "traddr": "10.0.0.2",
00:21:19.404 "trsvcid": "4420"
00:21:19.404 },
00:21:19.404 "peer_address": {
00:21:19.404 "trtype": "TCP",
00:21:19.404 "adrfam": "IPv4",
00:21:19.404 "traddr": "10.0.0.1",
00:21:19.404 "trsvcid": "58494"
00:21:19.404 },
00:21:19.404 "auth": {
00:21:19.404 "state": "completed",
00:21:19.404 "digest": "sha384",
00:21:19.404 "dhgroup": "ffdhe4096"
00:21:19.404 }
00:21:19.404 }
00:21:19.404 ]'
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:19.664 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
00:21:20.604 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:20.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:20.604 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
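Before every reconnect the host-side bdev_nvme layer is pinned to exactly one digest and one DH group, so a pass can only succeed through the combination under test; that is what the repeated bdev_nvme_set_options calls above do. Issued directly against the host instance's RPC socket:

    # Restrict the SPDK initiator to a single digest/dhgroup combination.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096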
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:20.863
00:21:20.863 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:21.124 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:21.124 {
00:21:21.124 "cntlid": 77,
00:21:21.124 "qid": 0,
00:21:21.124 "state": "enabled",
00:21:21.124 "thread": "nvmf_tgt_poll_group_000",
00:21:21.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:21.124 "listen_address": {
00:21:21.124 "trtype": "TCP",
00:21:21.124 "adrfam": "IPv4",
00:21:21.124 "traddr": "10.0.0.2",
00:21:21.124 "trsvcid": "4420"
00:21:21.124 },
00:21:21.124 "peer_address": {
00:21:21.124 "trtype": "TCP",
00:21:21.124 "adrfam": "IPv4",
00:21:21.124 "traddr": "10.0.0.1",
00:21:21.124 "trsvcid": "58522"
00:21:21.124 },
00:21:21.124 "auth": {
00:21:21.124 "state": "completed",
00:21:21.124 "digest": "sha384",
00:21:21.124 "dhgroup": "ffdhe4096"
00:21:21.124 }
00:21:21.124 }
00:21:21.124 ]'
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:21:21.974 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:21.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:21.974 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:22.233 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
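The ckey=(...) assignment above is why key3 passes authenticate one way only: ${ckeys[$3]:+...} expands to the --dhchap-ctrlr-key argument pair when a controller key is defined for that key id and to nothing when it is not, and key3 has no controller key in this run, so the nvmf_subsystem_add_host that follows receives --dhchap-key key3 alone. The idiom in isolation (array contents assumed for illustration):

    # ${var:+word} expands to word only if var is set and non-empty.
    declare -A ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)  # no entry for key id 3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]:-<empty>}"  # prints <empty> for keyid=3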
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:22.493
00:21:22.493 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:22.752 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:22.752 {
00:21:22.752 "cntlid": 79,
00:21:22.752 "qid": 0,
00:21:22.752 "state": "enabled",
00:21:22.752 "thread": "nvmf_tgt_poll_group_000",
00:21:22.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:22.752 "listen_address": {
00:21:22.752 "trtype": "TCP",
00:21:22.752 "adrfam": "IPv4",
00:21:22.752 "traddr": "10.0.0.2",
00:21:22.752 "trsvcid": "4420"
00:21:22.752 },
00:21:22.752 "peer_address": {
00:21:22.752 "trtype": "TCP",
00:21:22.752 "adrfam": "IPv4",
00:21:22.752 "traddr": "10.0.0.1",
00:21:22.752 "trsvcid": "58548"
00:21:22.752 },
00:21:22.752 "auth": {
00:21:22.752 "state": "completed",
00:21:22.752 "digest": "sha384",
00:21:22.752 "dhgroup": "ffdhe4096"
00:21:22.752 }
00:21:22.752 }
00:21:22.752 ]'
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:21:23.581 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:23.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:23.581 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:23.842 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
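The @119/@120 lines above mark the next turn of the two loops that drive this whole section: an outer loop over DH groups and an inner loop over key ids, each iteration re-applying the option restriction and running one connect_authenticate pass. Reconstructed from the trace (simplified, and listing only the groups visible in this excerpt):

    # Simplified reconstruction of the loop structure in target/auth.sh.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # groups seen here
    keys=(key0 key1 key2 key3)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done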
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:24.101
00:21:24.101 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:24.361 06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:24.361 {
00:21:24.361 "cntlid": 81,
00:21:24.361 "qid": 0,
00:21:24.361 "state": "enabled",
00:21:24.361 "thread": "nvmf_tgt_poll_group_000",
00:21:24.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:24.361 "listen_address": {
00:21:24.361 "trtype": "TCP",
00:21:24.361 "adrfam": "IPv4",
00:21:24.361 "traddr": "10.0.0.2",
00:21:24.361 "trsvcid": "4420"
00:21:24.361 },
00:21:24.361 "peer_address": {
00:21:24.361 "trtype": "TCP",
00:21:24.361 "adrfam": "IPv4",
00:21:24.361 "traddr": "10.0.0.1",
00:21:24.361 "trsvcid": "58580"
00:21:24.361 },
00:21:24.361 "auth": {
00:21:24.361 "state": "completed",
00:21:24.361 "digest": "sha384",
00:21:24.361 "dhgroup": "ffdhe6144"
00:21:24.361 }
00:21:24.361 }
00:21:24.361 ]'
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
06:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=:
00:21:25.562 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:25.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:25.562 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:25.562 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:25.822
00:21:25.822 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:26.083 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:26.083 {
00:21:26.083 "cntlid": 83,
00:21:26.083 "qid": 0,
00:21:26.083 "state": "enabled",
00:21:26.083 "thread": "nvmf_tgt_poll_group_000",
00:21:26.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:26.083 "listen_address": {
00:21:26.083 "trtype": "TCP",
00:21:26.083 "adrfam": "IPv4",
00:21:26.083 "traddr": "10.0.0.2",
00:21:26.083 "trsvcid": "4420"
00:21:26.083 },
00:21:26.083 "peer_address": {
00:21:26.083 "trtype": "TCP",
00:21:26.083 "adrfam": "IPv4",
00:21:26.083 "traddr": "10.0.0.1",
00:21:26.083 "trsvcid": "52642"
00:21:26.083 },
00:21:26.083 "auth": {
00:21:26.083 "state": "completed",
00:21:26.083 "digest": "sha384",
00:21:26.083 "dhgroup": "ffdhe6144"
00:21:26.083 }
00:21:26.083 }
00:21:26.083 ]'
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==:
00:21:27.014 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:27.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:27.014 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:27.282 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:27.543
00:21:27.543 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:27.803 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:27.804 {
00:21:27.804 "cntlid": 85,
00:21:27.804 "qid": 0,
00:21:27.804 "state": "enabled",
00:21:27.804 "thread": "nvmf_tgt_poll_group_000",
00:21:27.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:27.804 "listen_address": {
00:21:27.804 "trtype": "TCP",
00:21:27.804 "adrfam": "IPv4",
00:21:27.804 "traddr": "10.0.0.2",
00:21:27.804 "trsvcid": "4420"
00:21:27.804 },
00:21:27.804 "peer_address": {
00:21:27.804 "trtype": "TCP",
00:21:27.804 "adrfam": "IPv4",
00:21:27.804 "traddr": "10.0.0.1",
00:21:27.804 "trsvcid": "52666"
00:21:27.804 },
00:21:27.804 "auth": {
00:21:27.804 "state": "completed",
00:21:27.804 "digest": "sha384",
00:21:27.804 "dhgroup": "ffdhe6144"
00:21:27.804 }
00:21:27.804 }
00:21:27.804 ]'
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav:
00:21:29.003 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:29.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:29.003 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:29.263
00:21:29.264 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:29.524 06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:29.524 {
00:21:29.524 "cntlid": 87,
00:21:29.524 "qid": 0,
00:21:29.524 "state": "enabled",
00:21:29.524 "thread": "nvmf_tgt_poll_group_000",
00:21:29.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:21:29.524 "listen_address": {
00:21:29.524 "trtype": "TCP",
00:21:29.524 "adrfam": "IPv4",
00:21:29.524 "traddr": "10.0.0.2",
00:21:29.524 "trsvcid": "4420"
00:21:29.524 },
00:21:29.524 "peer_address": {
00:21:29.524 "trtype": "TCP",
00:21:29.524 "adrfam": "IPv4",
00:21:29.524 "traddr": "10.0.0.1",
00:21:29.524 "trsvcid": "52696"
00:21:29.524 },
00:21:29.524 "auth": {
00:21:29.524 "state": "completed",
00:21:29.524 "digest": "sha384",
00:21:29.524 "dhgroup": "ffdhe6144"
00:21:29.524 }
00:21:29.524 }
00:21:29.524 ]'
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
06:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=:
00:21:30.726 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:30.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:30.726 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
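Every secret in this run uses the NVMe DH-HMAC-CHAP representation DHHC-1:xx:&lt;base64&gt;:, where xx names the hash used to transform the key (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key material plus a CRC. Recent nvme-cli versions can generate such keys; the exact flags vary by release, so treat this as a sketch rather than a reference:

    # Generate a transformed DHHC-1 secret for a host NQN (flag names may
    # differ between nvme-cli releases; check nvme gen-dhchap-key --help).
    nvme gen-dhchap-key --hmac=2 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be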
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:31.297
00:21:31.297 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.298 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.298 { 00:21:31.298 "cntlid": 89, 00:21:31.298 "qid": 0, 00:21:31.298 "state": "enabled", 00:21:31.298 "thread": "nvmf_tgt_poll_group_000", 00:21:31.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.298 "listen_address": { 00:21:31.298 "trtype": "TCP", 00:21:31.298 "adrfam": "IPv4", 00:21:31.298 "traddr": "10.0.0.2", 00:21:31.298 "trsvcid": "4420" 00:21:31.298 }, 00:21:31.298 "peer_address": { 00:21:31.298 "trtype": "TCP", 00:21:31.298 "adrfam": "IPv4", 00:21:31.298 "traddr": "10.0.0.1", 00:21:31.298 "trsvcid": "52726" 00:21:31.298 }, 00:21:31.298 "auth": { 00:21:31.298 "state": "completed", 00:21:31.298 "digest": "sha384", 00:21:31.298 "dhgroup": "ffdhe8192" 00:21:31.298 } 00:21:31.298 } 00:21:31.298 ]' 00:21:31.298 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.559 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.559 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.559 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.559 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.559 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.559 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.559 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.820 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:31.820 06:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:32.391 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.391 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.391 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.391 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.391 06:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.391 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.391 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.391 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.650 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:32.650 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.650 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.650 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.650 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.650 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.651 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.651 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.651 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.651 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.651 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.651 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.651 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.911 00:21:32.911 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.911 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.911 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.171 { 00:21:33.171 "cntlid": 91, 00:21:33.171 "qid": 0, 00:21:33.171 "state": "enabled", 00:21:33.171 "thread": "nvmf_tgt_poll_group_000", 00:21:33.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:33.171 "listen_address": { 00:21:33.171 "trtype": "TCP", 00:21:33.171 "adrfam": "IPv4", 00:21:33.171 "traddr": "10.0.0.2", 00:21:33.171 "trsvcid": "4420" 00:21:33.171 }, 00:21:33.171 "peer_address": { 00:21:33.171 "trtype": "TCP", 00:21:33.171 "adrfam": "IPv4", 00:21:33.171 "traddr": "10.0.0.1", 00:21:33.171 "trsvcid": "52752" 00:21:33.171 }, 00:21:33.171 "auth": { 00:21:33.171 "state": "completed", 00:21:33.171 "digest": "sha384", 00:21:33.171 "dhgroup": "ffdhe8192" 00:21:33.171 } 00:21:33.171 } 00:21:33.171 ]' 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.171 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.432 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.432 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.432 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.432 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.432 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.432 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:33.432 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.376 06:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.376 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.945 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.945 06:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.945 { 00:21:34.945 "cntlid": 93, 00:21:34.945 "qid": 0, 00:21:34.945 "state": "enabled", 00:21:34.945 "thread": "nvmf_tgt_poll_group_000", 00:21:34.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:34.945 "listen_address": { 00:21:34.945 "trtype": "TCP", 00:21:34.945 "adrfam": "IPv4", 00:21:34.945 "traddr": "10.0.0.2", 00:21:34.945 "trsvcid": "4420" 00:21:34.945 }, 00:21:34.945 "peer_address": { 00:21:34.945 "trtype": "TCP", 00:21:34.945 "adrfam": "IPv4", 00:21:34.945 "traddr": "10.0.0.1", 00:21:34.945 "trsvcid": "53934" 00:21:34.945 }, 00:21:34.945 "auth": { 00:21:34.945 "state": "completed", 00:21:34.945 "digest": "sha384", 00:21:34.945 "dhgroup": "ffdhe8192" 00:21:34.945 } 00:21:34.945 } 00:21:34.945 ]' 00:21:34.945 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.205 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.205 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.205 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.205 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.205 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.205 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.205 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.466 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:35.466 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:36.037 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.037 06:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.037 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.037 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.037 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.037 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.037 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.037 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.297 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.866 00:21:36.866 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.866 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.866 06:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.866 { 00:21:36.866 "cntlid": 95, 00:21:36.866 "qid": 0, 00:21:36.866 "state": "enabled", 00:21:36.866 "thread": "nvmf_tgt_poll_group_000", 00:21:36.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:36.866 "listen_address": { 00:21:36.866 "trtype": "TCP", 00:21:36.866 "adrfam": "IPv4", 00:21:36.866 "traddr": "10.0.0.2", 00:21:36.866 "trsvcid": "4420" 00:21:36.866 }, 00:21:36.866 "peer_address": { 00:21:36.866 "trtype": "TCP", 00:21:36.866 "adrfam": "IPv4", 00:21:36.866 "traddr": "10.0.0.1", 00:21:36.866 "trsvcid": "53962" 00:21:36.866 }, 00:21:36.866 "auth": { 00:21:36.866 "state": "completed", 00:21:36.866 "digest": "sha384", 00:21:36.866 "dhgroup": "ffdhe8192" 00:21:36.866 } 00:21:36.866 } 00:21:36.866 ]' 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.866 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.127 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.127 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.127 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.127 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.127 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.127 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:37.127 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:38.068 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.068 06:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.068 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.328 00:21:38.328 
06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.328 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.328 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.588 { 00:21:38.588 "cntlid": 97, 00:21:38.588 "qid": 0, 00:21:38.588 "state": "enabled", 00:21:38.588 "thread": "nvmf_tgt_poll_group_000", 00:21:38.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.588 "listen_address": { 00:21:38.588 "trtype": "TCP", 00:21:38.588 "adrfam": "IPv4", 00:21:38.588 "traddr": "10.0.0.2", 00:21:38.588 "trsvcid": "4420" 00:21:38.588 }, 00:21:38.588 "peer_address": { 00:21:38.588 "trtype": "TCP", 00:21:38.588 "adrfam": "IPv4", 00:21:38.588 "traddr": "10.0.0.1", 00:21:38.588 "trsvcid": "53982" 00:21:38.588 }, 00:21:38.588 "auth": { 00:21:38.588 "state": "completed", 00:21:38.588 "digest": "sha512", 00:21:38.588 "dhgroup": "null" 00:21:38.588 } 00:21:38.588 } 00:21:38.588 ]' 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.588 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.849 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:38.849 06:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.419 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.679 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.941 00:21:39.941 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.941 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.941 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.202 { 00:21:40.202 "cntlid": 99, 00:21:40.202 "qid": 0, 00:21:40.202 "state": "enabled", 00:21:40.202 "thread": "nvmf_tgt_poll_group_000", 00:21:40.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:40.202 "listen_address": { 00:21:40.202 "trtype": "TCP", 00:21:40.202 "adrfam": "IPv4", 00:21:40.202 "traddr": "10.0.0.2", 00:21:40.202 "trsvcid": "4420" 00:21:40.202 }, 00:21:40.202 "peer_address": { 00:21:40.202 "trtype": "TCP", 00:21:40.202 "adrfam": "IPv4", 00:21:40.202 "traddr": "10.0.0.1", 00:21:40.202 "trsvcid": "54016" 00:21:40.202 }, 00:21:40.202 "auth": { 00:21:40.202 "state": "completed", 00:21:40.202 "digest": "sha512", 00:21:40.202 "dhgroup": "null" 00:21:40.202 } 00:21:40.202 } 00:21:40.202 ]' 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.202 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.462 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:40.463 06:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.031 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
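Each pass of the loop above drives one (digest, dhgroup, key) combination through the same RPC sequence: restrict the host's DH-HMAC-CHAP options, register the host NQN and its key pair on the subsystem, attach a controller through the host RPC socket so authentication runs, read back the qpair's auth state, and detach. A minimal sketch of one such pass, built only from the invocations visible in this log ($HOSTNQN stands in for the nqn.2014-08.org.nvmexpress:uuid:... host NQN above; the target-side calls assume rpc.py's default socket, which is what rpc_cmd uses here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # host side: allow only this digest/dhgroup pair for DH-HMAC-CHAP
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # target side: authorize the host NQN with a host key and a controller key
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attaching the controller performs the bidirectional authentication
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify the qpair authenticated, then tear down for the next pass
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0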
00:21:41.291 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.551 00:21:41.551 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.551 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.551 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.811 { 00:21:41.811 "cntlid": 101, 00:21:41.811 "qid": 0, 00:21:41.811 "state": "enabled", 00:21:41.811 "thread": "nvmf_tgt_poll_group_000", 00:21:41.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.811 "listen_address": { 00:21:41.811 "trtype": "TCP", 00:21:41.811 "adrfam": "IPv4", 00:21:41.811 "traddr": "10.0.0.2", 00:21:41.811 "trsvcid": "4420" 00:21:41.811 }, 00:21:41.811 "peer_address": { 00:21:41.811 "trtype": "TCP", 00:21:41.811 "adrfam": "IPv4", 00:21:41.811 "traddr": "10.0.0.1", 00:21:41.811 "trsvcid": "54036" 00:21:41.811 }, 00:21:41.811 "auth": { 00:21:41.811 "state": "completed", 00:21:41.811 "digest": "sha512", 00:21:41.811 "dhgroup": "null" 00:21:41.811 } 00:21:41.811 } 00:21:41.811 ]' 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:41.811 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.811 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.811 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.811 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.071 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:42.071 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.642 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.901 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.160 00:21:43.160 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.160 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.160 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.420 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.421 { 00:21:43.421 "cntlid": 103, 00:21:43.421 "qid": 0, 00:21:43.421 "state": "enabled", 00:21:43.421 "thread": "nvmf_tgt_poll_group_000", 00:21:43.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.421 "listen_address": { 00:21:43.421 "trtype": "TCP", 00:21:43.421 "adrfam": "IPv4", 00:21:43.421 "traddr": "10.0.0.2", 00:21:43.421 "trsvcid": "4420" 00:21:43.421 }, 00:21:43.421 "peer_address": { 00:21:43.421 "trtype": "TCP", 00:21:43.421 "adrfam": "IPv4", 00:21:43.421 "traddr": "10.0.0.1", 00:21:43.421 "trsvcid": "54068" 00:21:43.421 }, 00:21:43.421 "auth": { 00:21:43.421 "state": "completed", 00:21:43.421 "digest": "sha512", 00:21:43.421 "dhgroup": "null" 00:21:43.421 } 00:21:43.421 } 00:21:43.421 ]' 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.421 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.681 06:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:43.681 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.252 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
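Alongside the bdev path, each combination is also exercised end to end with the kernel initiator: nvme-cli is handed the same keys as DHHC-1 secret strings, the connect is expected to complete authentication, and the host entry is then removed before the next pass. A sketch of that half, with $HOSTNQN/$HOSTID and the elided DHHC-1 strings as placeholders for the values printed above (the key3 passes supply only --dhchap-secret, since no controller key is registered for key3):

  # the secrets below are elided placeholders; use the full DHHC-1 strings from the log
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret "DHHC-1:02:..." --dhchap-ctrl-secret "DHHC-1:01:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # deauthorize the host so the next (digest, dhgroup, key) pass starts clean
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"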
00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.512 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.771 00:21:44.771 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.771 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.771 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.030 { 00:21:45.030 "cntlid": 105, 00:21:45.030 "qid": 0, 00:21:45.030 "state": "enabled", 00:21:45.030 "thread": "nvmf_tgt_poll_group_000", 00:21:45.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:45.030 "listen_address": { 00:21:45.030 "trtype": "TCP", 00:21:45.030 "adrfam": "IPv4", 00:21:45.030 "traddr": "10.0.0.2", 00:21:45.030 "trsvcid": "4420" 00:21:45.030 }, 00:21:45.030 "peer_address": { 00:21:45.030 "trtype": "TCP", 00:21:45.030 "adrfam": "IPv4", 00:21:45.030 "traddr": "10.0.0.1", 00:21:45.030 "trsvcid": "48052" 00:21:45.030 }, 00:21:45.030 "auth": { 00:21:45.030 "state": "completed", 00:21:45.030 "digest": "sha512", 00:21:45.030 "dhgroup": "ffdhe2048" 00:21:45.030 } 00:21:45.030 } 00:21:45.030 ]' 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.030 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.030 06:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.290 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:45.290 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.865 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.176 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.475 00:21:46.475 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.475 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.475 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.475 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.475 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.475 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.475 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.758 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.758 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.758 { 00:21:46.758 "cntlid": 107, 00:21:46.758 "qid": 0, 00:21:46.758 "state": "enabled", 00:21:46.758 "thread": "nvmf_tgt_poll_group_000", 00:21:46.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.758 "listen_address": { 00:21:46.758 "trtype": "TCP", 00:21:46.758 "adrfam": "IPv4", 00:21:46.758 "traddr": "10.0.0.2", 00:21:46.758 "trsvcid": "4420" 00:21:46.758 }, 00:21:46.758 "peer_address": { 00:21:46.758 "trtype": "TCP", 00:21:46.758 "adrfam": "IPv4", 00:21:46.758 "traddr": "10.0.0.1", 00:21:46.758 "trsvcid": "48064" 00:21:46.758 }, 00:21:46.758 "auth": { 00:21:46.758 "state": "completed", 00:21:46.758 "digest": "sha512", 00:21:46.758 "dhgroup": "ffdhe2048" 00:21:46.758 } 00:21:46.758 } 00:21:46.758 ]' 00:21:46.758 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.758 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.758 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.758 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.758 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:46.759 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.759 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.759 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.019 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:47.019 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.588 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
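The trace above repeats one verification cycle per key index. For readability, here is a condensed sketch of a single cycle as reconstructed from this trace; rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used above, $hostnqn stands for the nqn.2014-08.org.nvmexpress:uuid host NQN used throughout this run, and the DHCHAP keys are assumed to have been loaded earlier in auth.sh (not shown in this excerpt):

    # Host side: pin the allowed digest/dhgroup combination
    # (host application listens on /var/tmp/host.sock in this run).
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target side (default RPC socket): authorize the host NQN with this key pair.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach a controller; DH-HMAC-CHAP runs during connect.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2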
00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.848 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.110 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.110 { 00:21:48.110 "cntlid": 109, 00:21:48.110 "qid": 0, 00:21:48.110 "state": "enabled", 00:21:48.110 "thread": "nvmf_tgt_poll_group_000", 00:21:48.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:48.110 "listen_address": { 00:21:48.110 "trtype": "TCP", 00:21:48.110 "adrfam": "IPv4", 00:21:48.110 "traddr": "10.0.0.2", 00:21:48.110 "trsvcid": "4420" 00:21:48.110 }, 00:21:48.110 "peer_address": { 00:21:48.110 "trtype": "TCP", 00:21:48.110 "adrfam": "IPv4", 00:21:48.110 "traddr": "10.0.0.1", 00:21:48.110 "trsvcid": "48104" 00:21:48.110 }, 00:21:48.110 "auth": { 00:21:48.110 "state": "completed", 00:21:48.110 "digest": "sha512", 00:21:48.110 "dhgroup": "ffdhe2048" 00:21:48.110 } 00:21:48.110 } 00:21:48.110 ]' 00:21:48.110 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.370 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.370 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.370 06:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.370 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.370 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.370 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.370 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.630 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:48.630 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.199 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.460 06:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.460 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.720 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.720 { 00:21:49.720 "cntlid": 111, 00:21:49.720 "qid": 0, 00:21:49.720 "state": "enabled", 00:21:49.720 "thread": "nvmf_tgt_poll_group_000", 00:21:49.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:49.720 "listen_address": { 00:21:49.720 "trtype": "TCP", 00:21:49.720 "adrfam": "IPv4", 00:21:49.720 "traddr": "10.0.0.2", 00:21:49.720 "trsvcid": "4420" 00:21:49.720 }, 00:21:49.720 "peer_address": { 00:21:49.720 "trtype": "TCP", 00:21:49.720 "adrfam": "IPv4", 00:21:49.720 "traddr": "10.0.0.1", 00:21:49.720 "trsvcid": "48128" 00:21:49.720 }, 00:21:49.720 "auth": { 00:21:49.720 "state": "completed", 00:21:49.720 "digest": "sha512", 00:21:49.720 "dhgroup": "ffdhe2048" 00:21:49.720 } 00:21:49.720 } 00:21:49.720 ]' 00:21:49.720 06:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.981 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.981 
06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.981 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.981 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.981 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.981 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.981 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.242 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:50.242 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.814 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.074 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.074 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.334 { 00:21:51.334 "cntlid": 113, 00:21:51.334 "qid": 0, 00:21:51.334 "state": "enabled", 00:21:51.334 "thread": "nvmf_tgt_poll_group_000", 00:21:51.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:51.334 "listen_address": { 00:21:51.334 "trtype": "TCP", 00:21:51.334 "adrfam": "IPv4", 00:21:51.334 "traddr": "10.0.0.2", 00:21:51.334 "trsvcid": "4420" 00:21:51.334 }, 00:21:51.334 "peer_address": { 00:21:51.334 "trtype": "TCP", 00:21:51.334 "adrfam": "IPv4", 00:21:51.334 "traddr": "10.0.0.1", 00:21:51.334 "trsvcid": "48148" 00:21:51.334 }, 00:21:51.334 "auth": { 00:21:51.334 "state": "completed", 00:21:51.334 "digest": "sha512", 00:21:51.334 "dhgroup": "ffdhe3072" 00:21:51.334 } 00:21:51.334 } 00:21:51.334 ]' 00:21:51.334 06:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.334 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.595 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.595 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.595 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.595 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.595 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.855 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:51.855 06:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.426 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.685 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.685 00:21:52.944 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.944 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.944 06:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.944 { 00:21:52.944 "cntlid": 115, 00:21:52.944 "qid": 0, 00:21:52.944 "state": "enabled", 00:21:52.944 "thread": "nvmf_tgt_poll_group_000", 00:21:52.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:52.944 "listen_address": { 00:21:52.944 "trtype": "TCP", 00:21:52.944 "adrfam": "IPv4", 00:21:52.944 "traddr": "10.0.0.2", 00:21:52.944 "trsvcid": "4420" 00:21:52.944 }, 00:21:52.944 "peer_address": { 00:21:52.944 "trtype": "TCP", 00:21:52.944 "adrfam": "IPv4", 
00:21:52.944 "traddr": "10.0.0.1", 00:21:52.944 "trsvcid": "48184" 00:21:52.944 }, 00:21:52.944 "auth": { 00:21:52.944 "state": "completed", 00:21:52.944 "digest": "sha512", 00:21:52.944 "dhgroup": "ffdhe3072" 00:21:52.944 } 00:21:52.944 } 00:21:52.944 ]' 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.944 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.204 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.204 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.204 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.204 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.204 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.204 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:53.204 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.145 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.146 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.146 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.146 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.146 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.146 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.146 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.146 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.406 00:21:54.406 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.406 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.406 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.666 { 00:21:54.666 "cntlid": 117, 00:21:54.666 "qid": 0, 00:21:54.666 "state": "enabled", 00:21:54.666 "thread": "nvmf_tgt_poll_group_000", 00:21:54.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:54.666 "listen_address": { 00:21:54.666 "trtype": "TCP", 
00:21:54.666 "adrfam": "IPv4", 00:21:54.666 "traddr": "10.0.0.2", 00:21:54.666 "trsvcid": "4420" 00:21:54.666 }, 00:21:54.666 "peer_address": { 00:21:54.666 "trtype": "TCP", 00:21:54.666 "adrfam": "IPv4", 00:21:54.666 "traddr": "10.0.0.1", 00:21:54.666 "trsvcid": "48204" 00:21:54.666 }, 00:21:54.666 "auth": { 00:21:54.666 "state": "completed", 00:21:54.666 "digest": "sha512", 00:21:54.666 "dhgroup": "ffdhe3072" 00:21:54.666 } 00:21:54.666 } 00:21:54.666 ]' 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.666 06:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.927 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:54.927 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.496 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.757 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.018 00:21:56.018 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.018 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.018 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.279 { 00:21:56.279 "cntlid": 119, 00:21:56.279 "qid": 0, 00:21:56.279 "state": "enabled", 00:21:56.279 "thread": "nvmf_tgt_poll_group_000", 00:21:56.279 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.279 "listen_address": { 00:21:56.279 "trtype": "TCP", 00:21:56.279 "adrfam": "IPv4", 00:21:56.279 "traddr": "10.0.0.2", 00:21:56.279 "trsvcid": "4420" 00:21:56.279 }, 00:21:56.279 "peer_address": { 00:21:56.279 "trtype": "TCP", 00:21:56.279 "adrfam": "IPv4", 00:21:56.279 "traddr": "10.0.0.1", 00:21:56.279 "trsvcid": "37174" 00:21:56.279 }, 00:21:56.279 "auth": { 00:21:56.279 "state": "completed", 00:21:56.279 "digest": "sha512", 00:21:56.279 "dhgroup": "ffdhe3072" 00:21:56.279 } 00:21:56.279 } 00:21:56.279 ]' 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.279 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.540 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:56.540 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.108 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.108 06:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.367 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.627 00:21:57.627 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.627 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.627 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.886 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.886 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.886 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.886 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.886 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.886 06:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.886 { 00:21:57.886 "cntlid": 121, 00:21:57.886 "qid": 0, 00:21:57.886 "state": "enabled", 00:21:57.886 "thread": "nvmf_tgt_poll_group_000", 00:21:57.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:57.887 "listen_address": { 00:21:57.887 "trtype": "TCP", 00:21:57.887 "adrfam": "IPv4", 00:21:57.887 "traddr": "10.0.0.2", 00:21:57.887 "trsvcid": "4420" 00:21:57.887 }, 00:21:57.887 "peer_address": { 00:21:57.887 "trtype": "TCP", 00:21:57.887 "adrfam": "IPv4", 00:21:57.887 "traddr": "10.0.0.1", 00:21:57.887 "trsvcid": "37208" 00:21:57.887 }, 00:21:57.887 "auth": { 00:21:57.887 "state": "completed", 00:21:57.887 "digest": "sha512", 00:21:57.887 "dhgroup": "ffdhe4096" 00:21:57.887 } 00:21:57.887 } 00:21:57.887 ]' 00:21:57.887 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.887 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.887 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.887 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:57.887 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.887 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.887 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.887 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.146 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:58.146 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:21:58.715 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.715 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.715 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.715 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.715 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
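Besides the bdev-layer attach, every cycle also exercises the kernel initiator, as in the nvme connect/disconnect lines above. A sketch of that leg, with $key and $ckey standing in for the DHHC-1 secrets printed in this trace and $hostid for the host UUID:

    # Kernel initiator authenticates with the DHHC-1 secrets directly.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Drop the host authorization again so the next cycle starts clean.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"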
00:21:58.715 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.715 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.716 06:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.975 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.233 00:21:59.233 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.233 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.233 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.492 { 00:21:59.492 "cntlid": 123, 00:21:59.492 "qid": 0, 00:21:59.492 "state": "enabled", 00:21:59.492 "thread": "nvmf_tgt_poll_group_000", 00:21:59.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:59.492 "listen_address": { 00:21:59.492 "trtype": "TCP", 00:21:59.492 "adrfam": "IPv4", 00:21:59.492 "traddr": "10.0.0.2", 00:21:59.492 "trsvcid": "4420" 00:21:59.492 }, 00:21:59.492 "peer_address": { 00:21:59.492 "trtype": "TCP", 00:21:59.492 "adrfam": "IPv4", 00:21:59.492 "traddr": "10.0.0.1", 00:21:59.492 "trsvcid": "37224" 00:21:59.492 }, 00:21:59.492 "auth": { 00:21:59.492 "state": "completed", 00:21:59.492 "digest": "sha512", 00:21:59.492 "dhgroup": "ffdhe4096" 00:21:59.492 } 00:21:59.492 } 00:21:59.492 ]' 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.492 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.751 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.751 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.752 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.752 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:21:59.752 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:22:00.322 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.581 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.581 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.582 06:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.582 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.842 00:22:00.842 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.842 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.842 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.101 06:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.101 { 00:22:01.101 "cntlid": 125, 00:22:01.101 "qid": 0, 00:22:01.101 "state": "enabled", 00:22:01.101 "thread": "nvmf_tgt_poll_group_000", 00:22:01.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.101 "listen_address": { 00:22:01.101 "trtype": "TCP", 00:22:01.101 "adrfam": "IPv4", 00:22:01.101 "traddr": "10.0.0.2", 00:22:01.101 "trsvcid": "4420" 00:22:01.101 }, 00:22:01.101 "peer_address": { 00:22:01.101 "trtype": "TCP", 00:22:01.101 "adrfam": "IPv4", 00:22:01.101 "traddr": "10.0.0.1", 00:22:01.101 "trsvcid": "37252" 00:22:01.101 }, 00:22:01.101 "auth": { 00:22:01.101 "state": "completed", 00:22:01.101 "digest": "sha512", 00:22:01.101 "dhgroup": "ffdhe4096" 00:22:01.101 } 00:22:01.101 } 00:22:01.101 ]' 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.101 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.359 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.359 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.359 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.359 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.359 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.359 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:22:01.359 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.296 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.556 00:22:02.556 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.556 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.556 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.816 06:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.816 { 00:22:02.816 "cntlid": 127, 00:22:02.816 "qid": 0, 00:22:02.816 "state": "enabled", 00:22:02.816 "thread": "nvmf_tgt_poll_group_000", 00:22:02.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:02.816 "listen_address": { 00:22:02.816 "trtype": "TCP", 00:22:02.816 "adrfam": "IPv4", 00:22:02.816 "traddr": "10.0.0.2", 00:22:02.816 "trsvcid": "4420" 00:22:02.816 }, 00:22:02.816 "peer_address": { 00:22:02.816 "trtype": "TCP", 00:22:02.816 "adrfam": "IPv4", 00:22:02.816 "traddr": "10.0.0.1", 00:22:02.816 "trsvcid": "37280" 00:22:02.816 }, 00:22:02.816 "auth": { 00:22:02.816 "state": "completed", 00:22:02.816 "digest": "sha512", 00:22:02.816 "dhgroup": "ffdhe4096" 00:22:02.816 } 00:22:02.816 } 00:22:02.816 ]' 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.816 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.816 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.816 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.816 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.816 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.816 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.076 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:03.076 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:03.644 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.903 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.903 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.903 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.903 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.903 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.903 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.903 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.904 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.904 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.164 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.425 
06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.425 { 00:22:04.425 "cntlid": 129, 00:22:04.425 "qid": 0, 00:22:04.425 "state": "enabled", 00:22:04.425 "thread": "nvmf_tgt_poll_group_000", 00:22:04.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:04.425 "listen_address": { 00:22:04.425 "trtype": "TCP", 00:22:04.425 "adrfam": "IPv4", 00:22:04.425 "traddr": "10.0.0.2", 00:22:04.425 "trsvcid": "4420" 00:22:04.425 }, 00:22:04.425 "peer_address": { 00:22:04.425 "trtype": "TCP", 00:22:04.425 "adrfam": "IPv4", 00:22:04.425 "traddr": "10.0.0.1", 00:22:04.425 "trsvcid": "37310" 00:22:04.425 }, 00:22:04.425 "auth": { 00:22:04.425 "state": "completed", 00:22:04.425 "digest": "sha512", 00:22:04.425 "dhgroup": "ffdhe6144" 00:22:04.425 } 00:22:04.425 } 00:22:04.425 ]' 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.425 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.685 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.685 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.685 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.685 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.685 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.685 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:22:04.685 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.624 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.625 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.885 00:22:05.885 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.885 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.885 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.145 { 00:22:06.145 "cntlid": 131, 00:22:06.145 "qid": 0, 00:22:06.145 "state": "enabled", 00:22:06.145 "thread": "nvmf_tgt_poll_group_000", 00:22:06.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:06.145 "listen_address": { 00:22:06.145 "trtype": "TCP", 00:22:06.145 "adrfam": "IPv4", 00:22:06.145 "traddr": "10.0.0.2", 00:22:06.145 "trsvcid": "4420" 00:22:06.145 }, 00:22:06.145 "peer_address": { 00:22:06.145 "trtype": "TCP", 00:22:06.145 "adrfam": "IPv4", 00:22:06.145 "traddr": "10.0.0.1", 00:22:06.145 "trsvcid": "51406" 00:22:06.145 }, 00:22:06.145 "auth": { 00:22:06.145 "state": "completed", 00:22:06.145 "digest": "sha512", 00:22:06.145 "dhgroup": "ffdhe6144" 00:22:06.145 } 00:22:06.145 } 00:22:06.145 ]' 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.145 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:06.404 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.404 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.405 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.405 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.405 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:22:06.405 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.345 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.606 00:22:07.606 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.606 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.606 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.866 { 00:22:07.866 "cntlid": 133, 00:22:07.866 "qid": 0, 00:22:07.866 "state": "enabled", 00:22:07.866 "thread": "nvmf_tgt_poll_group_000", 00:22:07.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:07.866 "listen_address": { 00:22:07.866 "trtype": "TCP", 00:22:07.866 "adrfam": "IPv4", 00:22:07.866 "traddr": "10.0.0.2", 00:22:07.866 "trsvcid": "4420" 00:22:07.866 }, 00:22:07.866 "peer_address": { 00:22:07.866 "trtype": "TCP", 00:22:07.866 "adrfam": "IPv4", 00:22:07.866 "traddr": "10.0.0.1", 00:22:07.866 "trsvcid": "51422" 00:22:07.866 }, 00:22:07.866 "auth": { 00:22:07.866 "state": "completed", 00:22:07.866 "digest": "sha512", 00:22:07.866 "dhgroup": "ffdhe6144" 00:22:07.866 } 00:22:07.866 } 00:22:07.866 ]' 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.866 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.125 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.125 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.125 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.125 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret 
DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:22:08.125 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:22:09.065 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:09.065 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.325 00:22:09.325 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.325 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.325 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.585 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.585 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.585 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.586 { 00:22:09.586 "cntlid": 135, 00:22:09.586 "qid": 0, 00:22:09.586 "state": "enabled", 00:22:09.586 "thread": "nvmf_tgt_poll_group_000", 00:22:09.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.586 "listen_address": { 00:22:09.586 "trtype": "TCP", 00:22:09.586 "adrfam": "IPv4", 00:22:09.586 "traddr": "10.0.0.2", 00:22:09.586 "trsvcid": "4420" 00:22:09.586 }, 00:22:09.586 "peer_address": { 00:22:09.586 "trtype": "TCP", 00:22:09.586 "adrfam": "IPv4", 00:22:09.586 "traddr": "10.0.0.1", 00:22:09.586 "trsvcid": "51448" 00:22:09.586 }, 00:22:09.586 "auth": { 00:22:09.586 "state": "completed", 00:22:09.586 "digest": "sha512", 00:22:09.586 "dhgroup": "ffdhe6144" 00:22:09.586 } 00:22:09.586 } 00:22:09.586 ]' 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.586 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.846 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.846 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.846 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.846 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:09.846 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.786 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.355 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.355 { 00:22:11.355 "cntlid": 137, 00:22:11.355 "qid": 0, 00:22:11.355 "state": "enabled", 00:22:11.355 "thread": "nvmf_tgt_poll_group_000", 00:22:11.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:11.355 "listen_address": { 00:22:11.355 "trtype": "TCP", 00:22:11.355 "adrfam": "IPv4", 00:22:11.355 "traddr": "10.0.0.2", 00:22:11.355 "trsvcid": "4420" 00:22:11.355 }, 00:22:11.355 "peer_address": { 00:22:11.355 "trtype": "TCP", 00:22:11.355 "adrfam": "IPv4", 00:22:11.355 "traddr": "10.0.0.1", 00:22:11.355 "trsvcid": "51472" 00:22:11.355 }, 00:22:11.355 "auth": { 00:22:11.355 "state": "completed", 00:22:11.355 "digest": "sha512", 00:22:11.355 "dhgroup": "ffdhe8192" 00:22:11.355 } 00:22:11.355 } 00:22:11.355 ]' 00:22:11.355 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.614 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.614 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.614 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.614 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.614 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.614 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.614 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.873 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:22:11.873 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.442 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.701 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:12.701 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.701 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.701 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:12.701 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:12.702 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.702 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.702 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.702 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.702 06:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.702 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.702 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.702 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.277 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.277 { 00:22:13.277 "cntlid": 139, 00:22:13.277 "qid": 0, 00:22:13.277 "state": "enabled", 00:22:13.277 "thread": "nvmf_tgt_poll_group_000", 00:22:13.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.277 "listen_address": { 00:22:13.277 "trtype": "TCP", 00:22:13.277 "adrfam": "IPv4", 00:22:13.277 "traddr": "10.0.0.2", 00:22:13.277 "trsvcid": "4420" 00:22:13.277 }, 00:22:13.277 "peer_address": { 00:22:13.277 "trtype": "TCP", 00:22:13.277 "adrfam": "IPv4", 00:22:13.277 "traddr": "10.0.0.1", 00:22:13.277 "trsvcid": "51506" 00:22:13.277 }, 00:22:13.277 "auth": { 00:22:13.277 "state": "completed", 00:22:13.277 "digest": "sha512", 00:22:13.277 "dhgroup": "ffdhe8192" 00:22:13.277 } 00:22:13.277 } 00:22:13.277 ]' 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.277 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.536 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.536 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.536 06:32:33 
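After the attach, the test verifies what was actually negotiated by dumping the subsystem's queue pairs and picking fields out with jq; "state": "completed" distinguishes a finished authentication transaction from one that was skipped. A condensed form of the three assertions, with the jq filters copied from the trace:

  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # expected output for this pass: sha512, ffdhe8192, completed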
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.536 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.536 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.536 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:22:13.536 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: --dhchap-ctrl-secret DHHC-1:02:NGQ4NWFkODAzZTM2NDE3YWE4MmU3ZTQ4YjQ3MjVmZWFjMjM4ZjJlNGE4MTNlMGQwGaDURg==: 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.470 06:32:34 
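The --dhchap-secret/--dhchap-ctrl-secret strings passed to nvme connect follow the NVMe TP 8006 representation DHHC-1:<t>:<base64 key+CRC>:, where, as far as I can tell, <t> of 00 marks an unhashed secret and 01/02/03 mark secrets transformed with SHA-256/384/512. nvme-cli can generate such secrets; the flag spelling below is a hypothetical example from recent nvme-cli and worth checking against the installed version:

  # hypothetical: generate a 32-byte secret transformed with SHA-256 for <hostnqn>
  nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn <hostnqn>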
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.470 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.038 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.038 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.296 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.296 { 00:22:15.296 "cntlid": 141, 00:22:15.296 "qid": 0, 00:22:15.296 "state": "enabled", 00:22:15.296 "thread": "nvmf_tgt_poll_group_000", 00:22:15.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:15.296 "listen_address": { 00:22:15.296 "trtype": "TCP", 00:22:15.296 "adrfam": "IPv4", 00:22:15.296 "traddr": "10.0.0.2", 00:22:15.296 "trsvcid": "4420" 00:22:15.296 }, 00:22:15.296 "peer_address": { 00:22:15.296 "trtype": "TCP", 00:22:15.296 "adrfam": "IPv4", 00:22:15.296 "traddr": "10.0.0.1", 00:22:15.296 "trsvcid": "33402" 00:22:15.296 }, 00:22:15.296 "auth": { 00:22:15.297 "state": "completed", 00:22:15.297 "digest": "sha512", 00:22:15.297 "dhgroup": "ffdhe8192" 00:22:15.297 } 00:22:15.297 } 00:22:15.297 ]' 00:22:15.297 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.297 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.297 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.297 06:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.297 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.297 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.297 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.297 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.555 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:22:15.555 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:01:OWUwMmQ4MzgyODI2NWRiNTg1NTgyYTMzODBiMzQ2MjJsYEav: 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.121 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.378 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:16.378 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.378 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.378 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.378 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.378 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.379 06:32:36 
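The ckey assignment visible above is what makes the controller key optional: ${ckeys[$3]:+...} expands to the option pair only when the ckeys slot for this key id is set and non-empty, and the unquoted array assignment then splits it into the flag and its value. For key3 the slot is empty, so ckey=() and the host is registered for unidirectional authentication only, which is exactly what the next nvmf_subsystem_add_host call shows. In isolation, with hypothetical values:

  ckeys=([1]="c1secret" [3]="")                          # index 3 deliberately empty
  keyid=3                                                # stands in for $3 in the helper
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"                                     # prints 0: no ctrlr key is passed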
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:16.379 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.379 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.379 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.379 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.379 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.379 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.946 00:22:16.946 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.946 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.946 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.946 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.946 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.946 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.946 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.946 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.946 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.946 { 00:22:16.946 "cntlid": 143, 00:22:16.946 "qid": 0, 00:22:16.946 "state": "enabled", 00:22:16.946 "thread": "nvmf_tgt_poll_group_000", 00:22:16.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:16.946 "listen_address": { 00:22:16.946 "trtype": "TCP", 00:22:16.946 "adrfam": "IPv4", 00:22:16.946 "traddr": "10.0.0.2", 00:22:16.946 "trsvcid": "4420" 00:22:16.946 }, 00:22:16.946 "peer_address": { 00:22:16.947 "trtype": "TCP", 00:22:16.947 "adrfam": "IPv4", 00:22:16.947 "traddr": "10.0.0.1", 00:22:16.947 "trsvcid": "33428" 00:22:16.947 }, 00:22:16.947 "auth": { 00:22:16.947 "state": "completed", 00:22:16.947 "digest": "sha512", 00:22:16.947 "dhgroup": "ffdhe8192" 00:22:16.947 } 00:22:16.947 } 00:22:16.947 ]' 00:22:16.947 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.947 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.947 
06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.206 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.206 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.206 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.206 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.206 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.206 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:17.206 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:18.141 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.142 06:32:38 
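For the final positive pass the restrictions are lifted: IFS=, together with printf %s joins the full digest and dhgroup lists, and the host is reconfigured to accept everything, leaving host and target free to negotiate any combination. The resulting RPC, as issued in the trace:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192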
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.142 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.711 00:22:18.711 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.711 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.711 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.971 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.971 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.971 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.971 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.971 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.972 { 00:22:18.972 "cntlid": 145, 00:22:18.972 "qid": 0, 00:22:18.972 "state": "enabled", 00:22:18.972 "thread": "nvmf_tgt_poll_group_000", 00:22:18.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:18.972 "listen_address": { 00:22:18.972 "trtype": "TCP", 00:22:18.972 "adrfam": "IPv4", 00:22:18.972 "traddr": "10.0.0.2", 00:22:18.972 "trsvcid": "4420" 00:22:18.972 }, 00:22:18.972 "peer_address": { 00:22:18.972 
"trtype": "TCP", 00:22:18.972 "adrfam": "IPv4", 00:22:18.972 "traddr": "10.0.0.1", 00:22:18.972 "trsvcid": "33452" 00:22:18.972 }, 00:22:18.972 "auth": { 00:22:18.972 "state": "completed", 00:22:18.972 "digest": "sha512", 00:22:18.972 "dhgroup": "ffdhe8192" 00:22:18.972 } 00:22:18.972 } 00:22:18.972 ]' 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.972 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.231 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:22:19.231 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxYTg4OGJlZDAwOTcyM2IyNmU2MDE2YmEyYWVkMDdlZDEyNzg4ODU1MDcxNmZitRPKZw==: --dhchap-ctrl-secret DHHC-1:03:ZGM2MjM5ZWY5Y2Q4NjM3Mzk4YjdkOWE3ZTJhNzMxOWU3MzhkNjM4OWUxZTIzNjI0YjM5Zjg1MDY5NzAxZDMzZgVaHK0=: 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:19.801 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:20.368 request: 00:22:20.368 { 00:22:20.368 "name": "nvme0", 00:22:20.368 "trtype": "tcp", 00:22:20.368 "traddr": "10.0.0.2", 00:22:20.368 "adrfam": "ipv4", 00:22:20.368 "trsvcid": "4420", 00:22:20.368 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:20.368 "prchk_reftag": false, 00:22:20.368 "prchk_guard": false, 00:22:20.368 "hdgst": false, 00:22:20.368 "ddgst": false, 00:22:20.368 "dhchap_key": "key2", 00:22:20.368 "allow_unrecognized_csi": false, 00:22:20.368 "method": "bdev_nvme_attach_controller", 00:22:20.368 "req_id": 1 00:22:20.368 } 00:22:20.368 Got JSON-RPC error response 00:22:20.368 response: 00:22:20.368 { 00:22:20.368 "code": -5, 00:22:20.368 "message": "Input/output error" 00:22:20.368 } 00:22:20.368 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:20.368 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.368 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.368 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.368 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.368 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.369 06:32:40 
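This is the first negative test: the subsystem entry only holds key1, the host presents key2, and bdev_nvme_attach_controller is expected to fail with the JSON-RPC code -5 (Input/output error) seen above. The NOT wrapper from autotest_common.sh inverts the wrapped command's exit status, so the script only proceeds when the attach really failed. Reduced to its essence (the real helper also validates its argument and tracks the status in es; this is just a sketch):

  NOT() {                                        # succeed iff the wrapped command fails
      if "$@"; then return 1; else return 0; fi
  }
  NOT bdev_connect -b nvme0 --dhchap-key key2    # passes: key2 is not registered for this host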
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.369 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.628 request: 00:22:20.628 { 00:22:20.628 "name": "nvme0", 00:22:20.628 "trtype": "tcp", 00:22:20.628 "traddr": "10.0.0.2", 00:22:20.628 "adrfam": "ipv4", 00:22:20.628 "trsvcid": "4420", 00:22:20.628 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:20.628 "prchk_reftag": false, 00:22:20.628 "prchk_guard": false, 00:22:20.628 "hdgst": false, 00:22:20.628 "ddgst": false, 00:22:20.628 "dhchap_key": "key1", 00:22:20.628 "dhchap_ctrlr_key": "ckey2", 00:22:20.628 "allow_unrecognized_csi": false, 00:22:20.628 "method": "bdev_nvme_attach_controller", 00:22:20.628 "req_id": 1 00:22:20.628 } 00:22:20.628 Got JSON-RPC error response 00:22:20.628 response: 00:22:20.628 { 00:22:20.628 "code": -5, 00:22:20.628 "message": "Input/output error" 00:22:20.628 } 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:20.889 06:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.889 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.149 request: 00:22:21.149 { 00:22:21.149 "name": "nvme0", 00:22:21.149 "trtype": "tcp", 00:22:21.149 "traddr": "10.0.0.2", 00:22:21.149 "adrfam": "ipv4", 00:22:21.149 "trsvcid": "4420", 00:22:21.149 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:21.149 "prchk_reftag": false, 00:22:21.149 "prchk_guard": false, 00:22:21.149 "hdgst": false, 00:22:21.149 "ddgst": false, 00:22:21.149 "dhchap_key": "key1", 00:22:21.149 "dhchap_ctrlr_key": "ckey1", 00:22:21.149 "allow_unrecognized_csi": false, 00:22:21.149 "method": "bdev_nvme_attach_controller", 00:22:21.149 "req_id": 1 00:22:21.149 } 00:22:21.149 Got JSON-RPC error response 00:22:21.149 response: 00:22:21.149 { 00:22:21.149 "code": -5, 00:22:21.149 "message": "Input/output error" 00:22:21.149 } 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2809405 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2809405 ']' 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2809405 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:21.149 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2809405 00:22:21.409 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:21.409 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:21.409 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2809405' 00:22:21.409 killing process with pid 2809405 00:22:21.409 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2809405 00:22:21.409 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2809405 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2835121 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2835121 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2835121 ']' 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:21.410 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.347 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:22.347 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:22.347 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.347 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:22.347 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2835121 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2835121 ']' 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
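The target is then restarted with --wait-for-rpc and -L nvmf_auth: --wait-for-rpc holds the app before framework initialization so that setup RPCs (the keyring loads that follow) can be issued first, and -L enables the nvmf_auth debug log component. The launch line, copied from the trace:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth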
00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:22.348 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 null0 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NsG 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.UzS ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UzS 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.M3K 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.j24 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.j24 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:22.607 06:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fsa 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.njj ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.njj 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.e9b 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
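This phase switches from inline DHHC-1 strings to named keyring entries: each secret file generated earlier in the run is registered with keyring_file_add_key, and from then on nvmf_subsystem_add_host and bdev_nvme_attach_controller refer to keys purely by name (key0..key3, ckey0..ckey2). The pattern, with names and paths as created by this run and <hostnqn> standing in for the uuid NQN:

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.NsG
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UzS
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3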
00:22:22.607 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.548 nvme0n1 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.548 { 00:22:23.548 "cntlid": 1, 00:22:23.548 "qid": 0, 00:22:23.548 "state": "enabled", 00:22:23.548 "thread": "nvmf_tgt_poll_group_000", 00:22:23.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:23.548 "listen_address": { 00:22:23.548 "trtype": "TCP", 00:22:23.548 "adrfam": "IPv4", 00:22:23.548 "traddr": "10.0.0.2", 00:22:23.548 "trsvcid": "4420" 00:22:23.548 }, 00:22:23.548 "peer_address": { 00:22:23.548 "trtype": "TCP", 00:22:23.548 "adrfam": "IPv4", 00:22:23.548 "traddr": "10.0.0.1", 00:22:23.548 "trsvcid": "33488" 00:22:23.548 }, 00:22:23.548 "auth": { 00:22:23.548 "state": "completed", 00:22:23.548 "digest": "sha512", 00:22:23.548 "dhgroup": "ffdhe8192" 00:22:23.548 } 00:22:23.548 } 00:22:23.548 ]' 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.548 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.808 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.808 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.808 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.808 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.808 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.808 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.067 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:24.067 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:24.651 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.943 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.943 request: 00:22:24.943 { 00:22:24.943 "name": "nvme0", 00:22:24.943 "trtype": "tcp", 00:22:24.943 "traddr": "10.0.0.2", 00:22:24.943 "adrfam": "ipv4", 00:22:24.943 "trsvcid": "4420", 00:22:24.943 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:24.943 "prchk_reftag": false, 00:22:24.943 "prchk_guard": false, 00:22:24.943 "hdgst": false, 00:22:24.943 "ddgst": false, 00:22:24.943 "dhchap_key": "key3", 00:22:24.943 "allow_unrecognized_csi": false, 00:22:24.943 "method": "bdev_nvme_attach_controller", 00:22:24.943 "req_id": 1 00:22:24.943 } 00:22:24.943 Got JSON-RPC error response 00:22:24.943 response: 00:22:24.943 { 00:22:24.943 "code": -5, 00:22:24.943 "message": "Input/output error" 00:22:24.943 } 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:24.943 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.207 request: 00:22:25.207 { 00:22:25.207 "name": "nvme0", 00:22:25.207 "trtype": "tcp", 00:22:25.207 "traddr": "10.0.0.2", 00:22:25.207 "adrfam": "ipv4", 00:22:25.207 "trsvcid": "4420", 00:22:25.207 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:25.207 "prchk_reftag": false, 00:22:25.207 "prchk_guard": false, 00:22:25.207 "hdgst": false, 00:22:25.207 "ddgst": false, 00:22:25.207 "dhchap_key": "key3", 00:22:25.207 "allow_unrecognized_csi": false, 00:22:25.207 "method": "bdev_nvme_attach_controller", 00:22:25.207 "req_id": 1 00:22:25.207 } 00:22:25.207 Got JSON-RPC error response 00:22:25.207 response: 00:22:25.207 { 00:22:25.207 "code": -5, 00:22:25.207 "message": "Input/output error" 00:22:25.207 } 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:25.207 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.467 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.468 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.727 request: 00:22:25.727 { 00:22:25.727 "name": "nvme0", 00:22:25.727 "trtype": "tcp", 00:22:25.727 "traddr": "10.0.0.2", 00:22:25.727 "adrfam": "ipv4", 00:22:25.727 "trsvcid": "4420", 00:22:25.727 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:25.727 "prchk_reftag": false, 00:22:25.727 "prchk_guard": false, 00:22:25.727 "hdgst": false, 00:22:25.727 "ddgst": false, 00:22:25.727 "dhchap_key": "key0", 00:22:25.727 "dhchap_ctrlr_key": "key1", 00:22:25.727 "allow_unrecognized_csi": false, 00:22:25.727 "method": "bdev_nvme_attach_controller", 00:22:25.727 "req_id": 1 00:22:25.727 } 00:22:25.727 Got JSON-RPC error response 00:22:25.727 response: 00:22:25.727 { 00:22:25.728 "code": -5, 00:22:25.728 "message": "Input/output error" 00:22:25.728 } 00:22:25.987 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:25.987 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:25.987 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:25.987 06:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:25.987 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:25.987 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:25.987 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:25.987 nvme0n1 00:22:26.247 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:26.247 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:26.247 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.247 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.247 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.247 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.507 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:26.507 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.507 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.507 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.507 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:26.507 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:26.507 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:27.076 nvme0n1 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.337 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:27.597 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.597 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:27.597 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: --dhchap-ctrl-secret DHHC-1:03:YWQyYmRjNTFjMDdkZDU4YWMxZGUxMDU4ODFkMWFkZWIzYjEwYjE5M2JkNDhkMjI2MDgyZjA1N2EzMDZlOGE1MdODjvY=: 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.166 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:28.426 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:28.995 request: 00:22:28.995 { 00:22:28.995 "name": "nvme0", 00:22:28.995 "trtype": "tcp", 00:22:28.995 "traddr": "10.0.0.2", 00:22:28.995 "adrfam": "ipv4", 00:22:28.995 "trsvcid": "4420", 00:22:28.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:28.995 "prchk_reftag": false, 00:22:28.995 "prchk_guard": false, 00:22:28.995 "hdgst": false, 00:22:28.995 "ddgst": false, 00:22:28.995 "dhchap_key": "key1", 00:22:28.995 "allow_unrecognized_csi": false, 00:22:28.995 "method": "bdev_nvme_attach_controller", 00:22:28.995 "req_id": 1 00:22:28.995 } 00:22:28.995 Got JSON-RPC error response 00:22:28.995 response: 00:22:28.995 { 00:22:28.995 "code": -5, 00:22:28.995 "message": "Input/output error" 00:22:28.995 } 00:22:28.995 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:28.995 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.995 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:28.995 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.995 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.995 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.995 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.563 nvme0n1 00:22:29.563 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:29.563 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:29.563 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.824 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.824 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.824 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.114 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.114 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.114 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.114 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.114 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:30.114 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:30.114 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:30.114 nvme0n1 00:22:30.373 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:30.373 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:30.373 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.373 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.373 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.374 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: '' 2s 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: ]] 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWU2OThjODIyMDk5ZTgyMzkwMjBjYWQyYWUwNDdiOTToY0dD: 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:30.633 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: 2s 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: ]] 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZjlkOGI0NDRlNzVjMzM1ODFmNDMxNzczMTg1MzFhYTgxZWU3OWI4YjExNWZlZmQ1eCwk/w==: 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:32.542 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:35.083 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:35.653 nvme0n1 00:22:35.653 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:35.653 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.653 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.653 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.653 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:35.653 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:35.913 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:35.913 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:35.913 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.173 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.173 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.173 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.173 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.173 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.173 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:36.173 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:36.432 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:36.998 request: 00:22:36.998 { 00:22:36.998 "name": "nvme0", 00:22:36.998 "dhchap_key": "key1", 00:22:36.998 "dhchap_ctrlr_key": "key3", 00:22:36.999 "method": "bdev_nvme_set_keys", 00:22:36.999 "req_id": 1 00:22:36.999 } 00:22:36.999 Got JSON-RPC error response 00:22:36.999 response: 00:22:36.999 { 00:22:36.999 "code": -13, 00:22:36.999 "message": "Permission denied" 00:22:36.999 } 00:22:36.999 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:36.999 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:36.999 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:36.999 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:36.999 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:36.999 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:36.999 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.258 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:37.258 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:38.198 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:38.198 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:38.198 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.456 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:38.456 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.456 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.456 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.456 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.456 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.456 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.457 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.023 nvme0n1 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
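The rekey checks traced around this point all follow one expected-failure pattern: the target's allowed key pair for this host is changed with nvmf_subsystem_set_keys, a host-side bdev_nvme_set_keys naming a pair the target no longer accepts must come back as JSON-RPC error -13 (Permission denied), and, since the controller was attached with --ctrlr-loss-timeout-sec 1, the session that can no longer re-authenticate drops out, so the test polls bdev_nvme_get_controllers until the host's controller list is empty. A minimal stand-alone sketch of that flow, reusing the rpc.py path, host RPC socket, NQNs, and key slots from the trace (the suite's NOT/hostrpc helpers are plumbing and are omitted here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Target side: from here on only key2 (host) / key3 (controller) are accepted.
$rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: rotating to a pair the target no longer allows must fail;
# rpc.py exits non-zero when the response carries error -13.
if $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "rekey unexpectedly succeeded" >&2
    exit 1
fi

# The stale session cannot re-authenticate; with --ctrlr-loss-timeout-sec 1
# the controller object drops out, so poll once per second until it is gone.
while (( $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
done

The same check is repeated just below with a mismatched controller key (key2 paired with key0) before the suite's final teardown.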
00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:39.023 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:39.591 request: 00:22:39.591 { 00:22:39.591 "name": "nvme0", 00:22:39.591 "dhchap_key": "key2", 00:22:39.591 "dhchap_ctrlr_key": "key0", 00:22:39.592 "method": "bdev_nvme_set_keys", 00:22:39.592 "req_id": 1 00:22:39.592 } 00:22:39.592 Got JSON-RPC error response 00:22:39.592 response: 00:22:39.592 { 00:22:39.592 "code": -13, 00:22:39.592 "message": "Permission denied" 00:22:39.592 } 00:22:39.592 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:39.592 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.592 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:39.592 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.592 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:39.592 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:39.592 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.853 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:39.853 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:40.792 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:40.792 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:40.792 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2809519 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2809519 ']' 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2809519 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:41.052 
06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2809519 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2809519' 00:22:41.052 killing process with pid 2809519 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2809519 00:22:41.052 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2809519 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.312 rmmod nvme_tcp 00:22:41.312 rmmod nvme_fabrics 00:22:41.312 rmmod nvme_keyring 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2835121 ']' 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2835121 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2835121 ']' 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2835121 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2835121 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2835121' 00:22:41.312 killing process with pid 2835121 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2835121 00:22:41.312 06:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2835121 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.312 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.573 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.573 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.573 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.573 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.573 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.NsG /tmp/spdk.key-sha256.M3K /tmp/spdk.key-sha384.fsa /tmp/spdk.key-sha512.e9b /tmp/spdk.key-sha512.UzS /tmp/spdk.key-sha384.j24 /tmp/spdk.key-sha256.njj '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:43.481 00:22:43.481 real 2m36.885s 00:22:43.481 user 5m53.229s 00:22:43.481 sys 0m24.594s 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.481 ************************************ 00:22:43.481 END TEST nvmf_auth_target 00:22:43.481 ************************************ 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:43.481 06:33:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:43.482 ************************************ 00:22:43.482 START TEST nvmf_bdevio_no_huge 00:22:43.482 ************************************ 00:22:43.482 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:43.743 * Looking for test storage... 
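Every suite in this log is driven by the same run_test wrapper: it prints the START TEST banner, runs the test script under time (producing the real/user/sys block above), and closes with the END TEST banner, so each suite's wall-clock cost can be read straight off the log. A simplified reconstruction of that wrapper, not the verbatim helper from common/autotest_common.sh (which, as the trace shows, also checks its argument count and disables xtrace around its bookkeeping):

# Simplified run_test-style wrapper; the banner is shortened for readability.
run_test() {
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}

# Invocation matching the trace above:
# run_test nvmf_bdevio_no_huge \
#     /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
#     --transport=tcp --no-hugepages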
00:22:43.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.743 --rc genhtml_branch_coverage=1 00:22:43.743 --rc genhtml_function_coverage=1 00:22:43.743 --rc genhtml_legend=1 00:22:43.743 --rc geninfo_all_blocks=1 00:22:43.743 --rc geninfo_unexecuted_blocks=1 00:22:43.743 00:22:43.743 ' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.743 --rc genhtml_branch_coverage=1 00:22:43.743 --rc genhtml_function_coverage=1 00:22:43.743 --rc genhtml_legend=1 00:22:43.743 --rc geninfo_all_blocks=1 00:22:43.743 --rc geninfo_unexecuted_blocks=1 00:22:43.743 00:22:43.743 ' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.743 --rc genhtml_branch_coverage=1 00:22:43.743 --rc genhtml_function_coverage=1 00:22:43.743 --rc genhtml_legend=1 00:22:43.743 --rc geninfo_all_blocks=1 00:22:43.743 --rc geninfo_unexecuted_blocks=1 00:22:43.743 00:22:43.743 ' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.743 --rc genhtml_branch_coverage=1 00:22:43.743 --rc genhtml_function_coverage=1 00:22:43.743 --rc genhtml_legend=1 00:22:43.743 --rc geninfo_all_blocks=1 00:22:43.743 --rc geninfo_unexecuted_blocks=1 00:22:43.743 00:22:43.743 ' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.743 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:43.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.744 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.878 
06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.878 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:51.879 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:51.879 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:51.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:51.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:22:51.879 00:22:51.879 --- 10.0.0.2 ping statistics --- 00:22:51.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.879 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:22:51.879 00:22:51.879 --- 10.0.0.1 ping statistics --- 00:22:51.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.879 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2843856 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2843856 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 2843856 ']' 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.879 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:51.880 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.880 [2024-11-20 06:33:11.647528] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:22:51.880 [2024-11-20 06:33:11.647599] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:51.880 [2024-11-20 06:33:11.754573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.880 [2024-11-20 06:33:11.814821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.880 [2024-11-20 06:33:11.814869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.880 [2024-11-20 06:33:11.814878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.880 [2024-11-20 06:33:11.814885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.880 [2024-11-20 06:33:11.814891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
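The trace from gather_supported_nvmf_pci_devs through nvmfappstart above is the whole per-test network bring-up: find the e810 ports by PCI ID, move one into a private namespace, address both ends, open the NVMe/TCP port, prove reachability with ping, and start nvmf_tgt inside the namespace. A condensed, hand-runnable sketch of the same steps, assuming the names seen in this log (cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, the 10.0.0.0/24 subnet) and an illustrative $SPDK_DIR for the checkout path; this is a sketch of what the helpers do, not the helper code verbatim:

    # Find e810 ports the way the helper does: vendor 0x8086, device 0x159b.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        ls "$pci/net"                       # kernel name(s) of the port, e.g. cvl_0_0
    done

    # One port becomes the target side, isolated in its own namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open port 4420; the real ipts helper tags the rule with a longer
    # SPDK_NVMF comment so teardown can strip exactly these rules later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target in the namespace, hugepage-free, matching the flags above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &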
00:22:51.880 [2024-11-20 06:33:11.816392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.880 [2024-11-20 06:33:11.816550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:51.880 [2024-11-20 06:33:11.816710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.880 [2024-11-20 06:33:11.816709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.449 [2024-11-20 06:33:12.516862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.449 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.449 Malloc0 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.450 [2024-11-20 06:33:12.570804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.450 { 00:22:52.450 "params": { 00:22:52.450 "name": "Nvme$subsystem", 00:22:52.450 "trtype": "$TEST_TRANSPORT", 00:22:52.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.450 "adrfam": "ipv4", 00:22:52.450 "trsvcid": "$NVMF_PORT", 00:22:52.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.450 "hdgst": ${hdgst:-false}, 00:22:52.450 "ddgst": ${ddgst:-false} 00:22:52.450 }, 00:22:52.450 "method": "bdev_nvme_attach_controller" 00:22:52.450 } 00:22:52.450 EOF 00:22:52.450 )") 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:52.450 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:52.450 "params": { 00:22:52.450 "name": "Nvme1", 00:22:52.450 "trtype": "tcp", 00:22:52.450 "traddr": "10.0.0.2", 00:22:52.450 "adrfam": "ipv4", 00:22:52.450 "trsvcid": "4420", 00:22:52.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.450 "hdgst": false, 00:22:52.450 "ddgst": false 00:22:52.450 }, 00:22:52.450 "method": "bdev_nvme_attach_controller" 00:22:52.450 }' 00:22:52.450 [2024-11-20 06:33:12.630286] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
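Everything bdevio needs was set up by five RPCs plus one generated JSON blob: the rpc_cmd calls configure the target over /var/tmp/spdk.sock, and the heredoc expanded by gen_nvmf_target_json becomes the initiator-side bdev config handed to bdevio via --json /dev/fd/62. A sketch of the equivalent done by hand, using scripts/rpc.py from the SPDK tree; the file name nvme1.json is illustrative, and the wrapper around the printed attach entry is assumed to be SPDK's usual subsystems/bdev layout:

    # Target side, mirroring the rpc_cmd sequence above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # nvme1.json, the initiator-side config (the attach params printed above):
    {
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false }
      } ] } ]
    }

    # Initiator side: run the bdevio suite against that config.
    test/bdev/bdevio/bdevio --json nvme1.json --no-huge -s 1024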
00:22:52.450 [2024-11-20 06:33:12.630357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2844199 ] 00:22:52.711 [2024-11-20 06:33:12.730783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:52.711 [2024-11-20 06:33:12.791504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.711 [2024-11-20 06:33:12.791670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.711 [2024-11-20 06:33:12.791670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.711 I/O targets: 00:22:52.711 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:52.711 00:22:52.711 00:22:52.711 CUnit - A unit testing framework for C - Version 2.1-3 00:22:52.711 http://cunit.sourceforge.net/ 00:22:52.711 00:22:52.711 00:22:52.711 Suite: bdevio tests on: Nvme1n1 00:22:52.971 Test: blockdev write read block ...passed 00:22:52.971 Test: blockdev write zeroes read block ...passed 00:22:52.971 Test: blockdev write zeroes read no split ...passed 00:22:52.971 Test: blockdev write zeroes read split ...passed 00:22:52.971 Test: blockdev write zeroes read split partial ...passed 00:22:52.971 Test: blockdev reset ...[2024-11-20 06:33:13.194881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:52.971 [2024-11-20 06:33:13.194985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff8800 (9): Bad file descriptor 00:22:53.232 [2024-11-20 06:33:13.250587] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:53.232 passed 00:22:53.232 Test: blockdev write read 8 blocks ...passed 00:22:53.232 Test: blockdev write read size > 128k ...passed 00:22:53.232 Test: blockdev write read invalid size ...passed 00:22:53.232 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:53.232 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:53.232 Test: blockdev write read max offset ...passed 00:22:53.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:53.232 Test: blockdev writev readv 8 blocks ...passed 00:22:53.232 Test: blockdev writev readv 30 x 1block ...passed 00:22:53.232 Test: blockdev writev readv block ...passed 00:22:53.232 Test: blockdev writev readv size > 128k ...passed 00:22:53.232 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:53.232 Test: blockdev comparev and writev ...[2024-11-20 06:33:13.476261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.232 [2024-11-20 06:33:13.476312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:53.232 [2024-11-20 06:33:13.476330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.232 [2024-11-20 06:33:13.476340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:53.232 [2024-11-20 06:33:13.476866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.232 [2024-11-20 06:33:13.476883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:53.232 [2024-11-20 06:33:13.476898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.233 [2024-11-20 06:33:13.476907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:53.233 [2024-11-20 06:33:13.477448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.233 [2024-11-20 06:33:13.477463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:53.233 [2024-11-20 06:33:13.477478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.233 [2024-11-20 06:33:13.477488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:53.233 [2024-11-20 06:33:13.478042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.233 [2024-11-20 06:33:13.478056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:53.233 [2024-11-20 06:33:13.478070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.233 [2024-11-20 06:33:13.478079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:53.493 passed 00:22:53.493 Test: blockdev nvme passthru rw ...passed 00:22:53.493 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:33:13.562886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.493 [2024-11-20 06:33:13.562906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:53.493 [2024-11-20 06:33:13.563316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.493 [2024-11-20 06:33:13.563334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:53.493 [2024-11-20 06:33:13.563738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.493 [2024-11-20 06:33:13.563752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:53.493 [2024-11-20 06:33:13.564149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.493 [2024-11-20 06:33:13.564168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:53.493 passed 00:22:53.493 Test: blockdev nvme admin passthru ...passed 00:22:53.493 Test: blockdev copy ...passed 00:22:53.493 00:22:53.493 Run Summary: Type Total Ran Passed Failed Inactive 00:22:53.493 suites 1 1 n/a 0 0 00:22:53.493 tests 23 23 23 0 0 00:22:53.493 asserts 152 152 152 0 n/a 00:22:53.493 00:22:53.493 Elapsed time = 1.302 seconds 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.754 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.754 rmmod nvme_tcp 00:22:53.754 rmmod nvme_fabrics 00:22:53.754 rmmod nvme_keyring 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2843856 ']' 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2843856 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 2843856 ']' 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 2843856 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:53.754 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2843856 00:22:54.015 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:22:54.015 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:22:54.015 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2843856' 00:22:54.015 killing process with pid 2843856 00:22:54.015 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 2843856 00:22:54.015 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 2843856 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.275 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.821 00:22:56.821 real 0m12.791s 00:22:56.821 user 0m14.622s 00:22:56.821 sys 0m6.847s 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
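Teardown, which nvmftestfini is driving here, is the mirror image of the bring-up: unload the initiator modules, kill the target process, strip only the firewall rules carrying the SPDK_NVMF marker, and dissolve the namespace. A rough sketch using the same names as above, with $nvmfpid standing in for the pid captured at startup (2843856 in this run):

    modprobe -r nvme-tcp        # cascades to nvme_fabrics/nvme_keyring, per the rmmod lines
    kill "$nvmfpid"             # killprocess: the target shows up as reactor_3 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # the iptr helper
    ip netns delete cvl_0_0_ns_spdk                         # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1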
common/autotest_common.sh@10 -- # set +x 00:22:56.821 ************************************ 00:22:56.821 END TEST nvmf_bdevio_no_huge 00:22:56.821 ************************************ 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:56.821 ************************************ 00:22:56.821 START TEST nvmf_tls 00:22:56.821 ************************************ 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:56.821 * Looking for test storage... 00:22:56.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.821 --rc genhtml_branch_coverage=1 00:22:56.821 --rc genhtml_function_coverage=1 00:22:56.821 --rc genhtml_legend=1 00:22:56.821 --rc geninfo_all_blocks=1 00:22:56.821 --rc geninfo_unexecuted_blocks=1 00:22:56.821 00:22:56.821 ' 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.821 --rc genhtml_branch_coverage=1 00:22:56.821 --rc genhtml_function_coverage=1 00:22:56.821 --rc genhtml_legend=1 00:22:56.821 --rc geninfo_all_blocks=1 00:22:56.821 --rc geninfo_unexecuted_blocks=1 00:22:56.821 00:22:56.821 ' 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.821 --rc genhtml_branch_coverage=1 00:22:56.821 --rc genhtml_function_coverage=1 00:22:56.821 --rc genhtml_legend=1 00:22:56.821 --rc geninfo_all_blocks=1 00:22:56.821 --rc geninfo_unexecuted_blocks=1 00:22:56.821 00:22:56.821 ' 00:22:56.821 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.821 --rc genhtml_branch_coverage=1 00:22:56.821 --rc genhtml_function_coverage=1 00:22:56.821 --rc genhtml_legend=1 00:22:56.821 --rc geninfo_all_blocks=1 00:22:56.821 --rc geninfo_unexecuted_blocks=1 00:22:56.821 00:22:56.822 ' 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
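Both test suites open with the identical probe traced above: the lcov version string is split on '.' and '-' and compared field by field against 2, because pre-2.0 lcov wants the --rc lcov_branch_coverage/lcov_function_coverage spellings that end up in LCOV_OPTS. A condensed, numeric-only re-implementation of that comparison (the real scripts/common.sh also normalizes non-numeric fields via its decimal helper):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local op=$2 IFS=.-
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local -i v lt=0 gt=0
        # Walk the longer of the two field lists; missing fields count as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { gt=1; break; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { lt=1; break; }
        done
        case $op in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
            *)   return 2 ;;
        esac
    }

    # Same extraction as the trace: last field of `lcov --version` is "1.15".
    cmp_versions "$(lcov --version | awk '{print $NF}')" '<' 2 && echo 'use old lcov flags'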
00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:56.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
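The "line 33: [: : integer expression expected" message, appearing once per common.sh source in this log, is benign but real: build_nvmf_app_args hands '[' an empty string where -eq needs an integer, so the test errors out with status 2 and simply behaves as false. A two-line reproduction with a hypothetical variable name, plus the usual guard:

    flag=""
    [ "$flag" -eq 1 ]          # -> [: : integer expression expected, exit status 2
    [ "${flag:-0}" -eq 1 ]     # guarded: empty expands to 0, the test is well-formed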
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.822 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:04.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:04.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:04.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:04.967 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.967 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.968 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:04.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:23:04.968 00:23:04.968 --- 10.0.0.2 ping statistics --- 00:23:04.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.968 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:23:04.968 00:23:04.968 --- 10.0.0.1 ping statistics --- 00:23:04.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.968 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2848607 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2848607 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2848607 ']' 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:04.968 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.968 [2024-11-20 06:33:24.401337] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
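Note: the nvmftestinit path traced above moves one of the two e810 ports into a private network namespace so the NVMe/TCP target (10.0.0.2) and the initiator (10.0.0.1) talk across the physical link. A condensed sketch of that bring-up for running it by hand, with interface names cvl_0_0/cvl_0_1 and the addresses taken from the log (assumes root, and that the harness has already renamed the ports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> netns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # netns -> root ns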
00:23:04.968 [2024-11-20 06:33:24.401408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.968 [2024-11-20 06:33:24.504524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.968 [2024-11-20 06:33:24.555208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.968 [2024-11-20 06:33:24.555264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.968 [2024-11-20 06:33:24.555272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.968 [2024-11-20 06:33:24.555280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.968 [2024-11-20 06:33:24.555286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.968 [2024-11-20 06:33:24.556084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.968 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:04.968 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:04.968 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:04.968 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:04.968 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.230 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.230 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:05.230 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:05.230 true 00:23:05.230 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.230 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:05.492 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:05.492 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:05.492 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:05.754 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.754 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:06.015 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:06.015 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:06.015 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:06.015 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.015 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:06.276 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:06.276 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:06.276 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:06.276 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.536 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:06.536 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:06.536 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:06.536 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.536 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:06.797 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:06.797 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:06.797 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:07.057 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:07.318 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:07.318 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:07.318 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.nRFf1hWNzp 00:23:07.318 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:07.318 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.EcTtccluBk 00:23:07.318 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:07.318 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:07.319 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.nRFf1hWNzp 00:23:07.319 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.EcTtccluBk 00:23:07.319 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:07.319 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:07.579 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.nRFf1hWNzp 00:23:07.579 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nRFf1hWNzp 00:23:07.579 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.838 [2024-11-20 06:33:27.907123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.838 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.838 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.098 [2024-11-20 06:33:28.227904] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.098 [2024-11-20 06:33:28.228117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.098 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:08.358 malloc0 00:23:08.358 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.358 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nRFf1hWNzp 00:23:08.618 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.618 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.nRFf1hWNzp 00:23:20.880 Initializing NVMe Controllers 00:23:20.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:20.880 Initialization complete. Launching workers. 00:23:20.880 ======================================================== 00:23:20.880 Latency(us) 00:23:20.880 Device Information : IOPS MiB/s Average min max 00:23:20.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18528.87 72.38 3454.30 939.00 4198.03 00:23:20.880 ======================================================== 00:23:20.880 Total : 18528.87 72.38 3454.30 939.00 4198.03 00:23:20.880 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nRFf1hWNzp 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nRFf1hWNzp 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2851621 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2851621 /var/tmp/bdevperf.sock 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2851621 ']' 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:20.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:20.880 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.880 [2024-11-20 06:33:39.040248] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:23:20.880 [2024-11-20 06:33:39.040303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851621 ] 00:23:20.880 [2024-11-20 06:33:39.127935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.880 [2024-11-20 06:33:39.162905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.880 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:20.880 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:20.880 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nRFf1hWNzp 00:23:20.880 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.880 [2024-11-20 06:33:40.164124] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.880 TLSTESTn1 00:23:20.880 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:20.880 Running I/O for 10 seconds... 
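Note: the xtrace above compresses to a short RPC sequence: pin the ssl sock implementation to TLS 1.3, finish framework init, then build a TLS-enabled subsystem (-k on the listener) and bind the PSK to host1 through the keyring. A sketch using the same rpc.py calls logged above (the key file is the mktemp result from earlier in the run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.nRFf1hWNzp
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The perf initiator then dials the same listener with -S ssl and --psk-path pointing at the matching key file, which is what produces the 10-second sample stream that follows.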
00:23:22.524 4976.00 IOPS, 19.44 MiB/s [2024-11-20T05:33:43.374Z] 4914.50 IOPS, 19.20 MiB/s [2024-11-20T05:33:44.760Z] 5094.33 IOPS, 19.90 MiB/s [2024-11-20T05:33:45.702Z] 5328.25 IOPS, 20.81 MiB/s [2024-11-20T05:33:46.724Z] 5398.20 IOPS, 21.09 MiB/s [2024-11-20T05:33:47.692Z] 5380.33 IOPS, 21.02 MiB/s [2024-11-20T05:33:48.634Z] 5462.14 IOPS, 21.34 MiB/s [2024-11-20T05:33:49.575Z] 5505.12 IOPS, 21.50 MiB/s [2024-11-20T05:33:50.515Z] 5372.00 IOPS, 20.98 MiB/s [2024-11-20T05:33:50.515Z] 5412.80 IOPS, 21.14 MiB/s 00:23:30.237 Latency(us) 00:23:30.237 [2024-11-20T05:33:50.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.237 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:30.237 Verification LBA range: start 0x0 length 0x2000 00:23:30.237 TLSTESTn1 : 10.04 5401.93 21.10 0.00 0.00 23629.31 5925.55 65972.91 00:23:30.237 [2024-11-20T05:33:50.516Z] =================================================================================================================== 00:23:30.237 [2024-11-20T05:33:50.516Z] Total : 5401.93 21.10 0.00 0.00 23629.31 5925.55 65972.91 00:23:30.237 { 00:23:30.237 "results": [ 00:23:30.237 { 00:23:30.237 "job": "TLSTESTn1", 00:23:30.237 "core_mask": "0x4", 00:23:30.237 "workload": "verify", 00:23:30.237 "status": "finished", 00:23:30.237 "verify_range": { 00:23:30.237 "start": 0, 00:23:30.237 "length": 8192 00:23:30.237 }, 00:23:30.237 "queue_depth": 128, 00:23:30.237 "io_size": 4096, 00:23:30.237 "runtime": 10.043638, 00:23:30.237 "iops": 5401.927070649102, 00:23:30.237 "mibps": 21.101277619723053, 00:23:30.237 "io_failed": 0, 00:23:30.237 "io_timeout": 0, 00:23:30.237 "avg_latency_us": 23629.307486007434, 00:23:30.237 "min_latency_us": 5925.546666666667, 00:23:30.237 "max_latency_us": 65972.90666666666 00:23:30.237 } 00:23:30.237 ], 00:23:30.237 "core_count": 1 00:23:30.237 } 00:23:30.237 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.237 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2851621 00:23:30.237 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2851621 ']' 00:23:30.237 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2851621 00:23:30.237 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:30.237 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.237 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2851621 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2851621' 00:23:30.498 killing process with pid 2851621 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2851621 00:23:30.498 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.498 00:23:30.498 Latency(us) 00:23:30.498 [2024-11-20T05:33:50.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.498 [2024-11-20T05:33:50.777Z] 
=================================================================================================================== 00:23:30.498 [2024-11-20T05:33:50.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2851621 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcTtccluBk 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcTtccluBk 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcTtccluBk 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EcTtccluBk 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2853802 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2853802 /var/tmp/bdevperf.sock 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2853802 ']' 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:30.498 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.498 [2024-11-20 06:33:50.668143] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:23:30.498 [2024-11-20 06:33:50.668214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853802 ] 00:23:30.498 [2024-11-20 06:33:50.750908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.758 [2024-11-20 06:33:50.779875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.329 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:31.329 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:31.329 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EcTtccluBk 00:23:31.588 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.589 [2024-11-20 06:33:51.755652] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.589 [2024-11-20 06:33:51.760192] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:31.589 [2024-11-20 06:33:51.760805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6dbb0 (107): Transport endpoint is not connected 00:23:31.589 [2024-11-20 06:33:51.761800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6dbb0 (9): Bad file descriptor 00:23:31.589 [2024-11-20 06:33:51.762801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:31.589 [2024-11-20 06:33:51.762810] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:31.589 [2024-11-20 06:33:51.762816] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:31.589 [2024-11-20 06:33:51.762825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:31.589 request: 00:23:31.589 { 00:23:31.589 "name": "TLSTEST", 00:23:31.589 "trtype": "tcp", 00:23:31.589 "traddr": "10.0.0.2", 00:23:31.589 "adrfam": "ipv4", 00:23:31.589 "trsvcid": "4420", 00:23:31.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.589 "prchk_reftag": false, 00:23:31.589 "prchk_guard": false, 00:23:31.589 "hdgst": false, 00:23:31.589 "ddgst": false, 00:23:31.589 "psk": "key0", 00:23:31.589 "allow_unrecognized_csi": false, 00:23:31.589 "method": "bdev_nvme_attach_controller", 00:23:31.589 "req_id": 1 00:23:31.589 } 00:23:31.589 Got JSON-RPC error response 00:23:31.589 response: 00:23:31.589 { 00:23:31.589 "code": -5, 00:23:31.589 "message": "Input/output error" 00:23:31.589 } 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2853802 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2853802 ']' 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2853802 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2853802 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2853802' 00:23:31.589 killing process with pid 2853802 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2853802 00:23:31.589 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.589 00:23:31.589 Latency(us) 00:23:31.589 [2024-11-20T05:33:51.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.589 [2024-11-20T05:33:51.868Z] =================================================================================================================== 00:23:31.589 [2024-11-20T05:33:51.868Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.589 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2853802 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nRFf1hWNzp 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.nRFf1hWNzp 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nRFf1hWNzp 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nRFf1hWNzp 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2853993 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2853993 /var/tmp/bdevperf.sock 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2853993 ']' 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:31.850 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.850 [2024-11-20 06:33:51.991839] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
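Note, for reference while reading these failure cases: both key files (/tmp/tmp.nRFf1hWNzp holding the key registered for host1, /tmp/tmp.EcTtccluBk the deliberately mismatched one) were produced earlier by format_interchange_psk, which emits the NVMe TLS PSK interchange format NVMeTLSkey-1:<hh>:<base64(configured PSK || CRC-32)>: with hash byte 01 selecting SHA-256. A rough standalone equivalent of the inline python the harness pipes through; the little-endian CRC suffix is an assumption inferred from the logged key values, not the harness's exact code:

    # Hypothetical re-creation of format_interchange_psk for illustration only.
    format_interchange_psk() {
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$1" "$2"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # should reproduce the first key seen above:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: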
00:23:31.850 [2024-11-20 06:33:51.991897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853993 ] 00:23:31.850 [2024-11-20 06:33:52.074647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.850 [2024-11-20 06:33:52.103684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.791 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.791 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:32.791 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nRFf1hWNzp 00:23:32.791 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:33.051 [2024-11-20 06:33:53.115430] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.051 [2024-11-20 06:33:53.122974] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:33.051 [2024-11-20 06:33:53.122994] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:33.051 [2024-11-20 06:33:53.123012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:33.051 [2024-11-20 06:33:53.123404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a8bb0 (107): Transport endpoint is not connected 00:23:33.051 [2024-11-20 06:33:53.124401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a8bb0 (9): Bad file descriptor 00:23:33.051 [2024-11-20 06:33:53.125403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:33.051 [2024-11-20 06:33:53.125414] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:33.051 [2024-11-20 06:33:53.125420] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:33.051 [2024-11-20 06:33:53.125428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:33.051 request: 00:23:33.051 { 00:23:33.051 "name": "TLSTEST", 00:23:33.051 "trtype": "tcp", 00:23:33.051 "traddr": "10.0.0.2", 00:23:33.051 "adrfam": "ipv4", 00:23:33.051 "trsvcid": "4420", 00:23:33.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.051 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.051 "prchk_reftag": false, 00:23:33.051 "prchk_guard": false, 00:23:33.051 "hdgst": false, 00:23:33.051 "ddgst": false, 00:23:33.051 "psk": "key0", 00:23:33.051 "allow_unrecognized_csi": false, 00:23:33.051 "method": "bdev_nvme_attach_controller", 00:23:33.051 "req_id": 1 00:23:33.051 } 00:23:33.051 Got JSON-RPC error response 00:23:33.051 response: 00:23:33.051 { 00:23:33.051 "code": -5, 00:23:33.051 "message": "Input/output error" 00:23:33.051 } 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2853993 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2853993 ']' 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2853993 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2853993 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2853993' 00:23:33.051 killing process with pid 2853993 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2853993 00:23:33.051 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.051 00:23:33.051 Latency(us) 00:23:33.051 [2024-11-20T05:33:53.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.051 [2024-11-20T05:33:53.330Z] =================================================================================================================== 00:23:33.051 [2024-11-20T05:33:53.330Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2853993 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nRFf1hWNzp 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.nRFf1hWNzp 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nRFf1hWNzp 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.051 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nRFf1hWNzp 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2854335 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2854335 /var/tmp/bdevperf.sock 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2854335 ']' 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:33.052 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.312 [2024-11-20 06:33:53.373091] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
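Note: this case follows the same negative pattern as 147 and 150 above: a wrong key for host1, a host NQN with no PSK registered, and here a subsystem NQN (cnode2) the target never created. The NOT wrapper from autotest_common.sh (visible in the trace as the valid_exec_arg/es= machinery) inverts the exit status so an expected connect failure counts as a pass. Schematically, simplified from the real helper:

    # Simplified sketch of the inversion idiom; the real NOT() in
    # autotest_common.sh also validates its argument via valid_exec_arg.
    NOT() {
        if "$@"; then
            return 1        # unexpectedly succeeded -> test fails
        fi
        return 0            # failed as expected (es=1 in the trace) -> test passes
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcTtccluBk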
00:23:33.312 [2024-11-20 06:33:53.373143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854335 ] 00:23:33.312 [2024-11-20 06:33:53.456605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.312 [2024-11-20 06:33:53.484930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.253 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.253 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:34.253 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nRFf1hWNzp 00:23:34.253 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.253 [2024-11-20 06:33:54.516592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.253 [2024-11-20 06:33:54.526402] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:34.253 [2024-11-20 06:33:54.526420] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:34.253 [2024-11-20 06:33:54.526439] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:34.253 [2024-11-20 06:33:54.526903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e2bb0 (107): Transport endpoint is not connected 00:23:34.253 [2024-11-20 06:33:54.527899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e2bb0 (9): Bad file descriptor 00:23:34.253 [2024-11-20 06:33:54.528901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:34.253 [2024-11-20 06:33:54.528910] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:34.253 [2024-11-20 06:33:54.528915] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:34.253 [2024-11-20 06:33:54.528923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
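The tcp.c/posix.c errors above show the lookup that failed: the target searches its keyring for a PSK whose identity is built from the host and subsystem NQNs. A small sketch of that identity string, composed exactly as printed in the errors, useful when cross-checking which registered key a connecting host should match:

# TLS PSK identity format as it appears in the errors above
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
echo "NVMe0R01 ${hostnqn} ${subnqn}"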
00:23:34.513 request: 00:23:34.513 { 00:23:34.513 "name": "TLSTEST", 00:23:34.513 "trtype": "tcp", 00:23:34.513 "traddr": "10.0.0.2", 00:23:34.513 "adrfam": "ipv4", 00:23:34.513 "trsvcid": "4420", 00:23:34.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.514 "prchk_reftag": false, 00:23:34.514 "prchk_guard": false, 00:23:34.514 "hdgst": false, 00:23:34.514 "ddgst": false, 00:23:34.514 "psk": "key0", 00:23:34.514 "allow_unrecognized_csi": false, 00:23:34.514 "method": "bdev_nvme_attach_controller", 00:23:34.514 "req_id": 1 00:23:34.514 } 00:23:34.514 Got JSON-RPC error response 00:23:34.514 response: 00:23:34.514 { 00:23:34.514 "code": -5, 00:23:34.514 "message": "Input/output error" 00:23:34.514 } 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2854335 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2854335 ']' 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2854335 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2854335 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2854335' 00:23:34.514 killing process with pid 2854335 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2854335 00:23:34.514 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.514 00:23:34.514 Latency(us) 00:23:34.514 [2024-11-20T05:33:54.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.514 [2024-11-20T05:33:54.793Z] =================================================================================================================== 00:23:34.514 [2024-11-20T05:33:54.793Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2854335 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:34.514 
06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2854675 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2854675 /var/tmp/bdevperf.sock 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2854675 ']' 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.514 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.514 [2024-11-20 06:33:54.774822] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
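The subtest starting here hands run_bdevperf an empty string as the key path. A minimal reproduction of the RPC it will issue, which the keyring rejects because only absolute paths are accepted (see the keyring.c error below):

# expected to fail: empty (non-absolute) key path
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' \
    || echo "rejected: Operation not permitted"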
00:23:34.514 [2024-11-20 06:33:54.774877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854675 ] 00:23:34.775 [2024-11-20 06:33:54.860130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.775 [2024-11-20 06:33:54.887685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.345 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:35.345 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:35.345 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:35.605 [2024-11-20 06:33:55.726661] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:35.605 [2024-11-20 06:33:55.726688] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:35.605 request: 00:23:35.605 { 00:23:35.605 "name": "key0", 00:23:35.605 "path": "", 00:23:35.605 "method": "keyring_file_add_key", 00:23:35.605 "req_id": 1 00:23:35.605 } 00:23:35.605 Got JSON-RPC error response 00:23:35.605 response: 00:23:35.605 { 00:23:35.605 "code": -1, 00:23:35.605 "message": "Operation not permitted" 00:23:35.605 } 00:23:35.605 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.867 [2024-11-20 06:33:55.907194] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.867 [2024-11-20 06:33:55.907217] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:35.867 request: 00:23:35.867 { 00:23:35.867 "name": "TLSTEST", 00:23:35.867 "trtype": "tcp", 00:23:35.867 "traddr": "10.0.0.2", 00:23:35.867 "adrfam": "ipv4", 00:23:35.867 "trsvcid": "4420", 00:23:35.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.867 "prchk_reftag": false, 00:23:35.867 "prchk_guard": false, 00:23:35.867 "hdgst": false, 00:23:35.867 "ddgst": false, 00:23:35.867 "psk": "key0", 00:23:35.867 "allow_unrecognized_csi": false, 00:23:35.867 "method": "bdev_nvme_attach_controller", 00:23:35.867 "req_id": 1 00:23:35.867 } 00:23:35.867 Got JSON-RPC error response 00:23:35.867 response: 00:23:35.867 { 00:23:35.867 "code": -126, 00:23:35.867 "message": "Required key not available" 00:23:35.867 } 00:23:35.867 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2854675 00:23:35.867 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2854675 ']' 00:23:35.867 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2854675 00:23:35.867 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:35.867 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.867 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2854675 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2854675' 00:23:35.867 killing process with pid 2854675 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2854675 00:23:35.867 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.867 00:23:35.867 Latency(us) 00:23:35.867 [2024-11-20T05:33:56.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.867 [2024-11-20T05:33:56.146Z] =================================================================================================================== 00:23:35.867 [2024-11-20T05:33:56.146Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2854675 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2848607 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2848607 ']' 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2848607 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.867 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2848607 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2848607' 00:23:36.128 killing process with pid 2848607 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2848607 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2848607 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:36.128 06:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ocnidqRYCY 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ocnidqRYCY 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2855027 00:23:36.128 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2855027 00:23:36.129 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.129 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2855027 ']' 00:23:36.129 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.129 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:36.129 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.129 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.129 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.129 [2024-11-20 06:33:56.386026] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:23:36.129 [2024-11-20 06:33:56.386091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.389 [2024-11-20 06:33:56.477873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.389 [2024-11-20 06:33:56.510175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.389 [2024-11-20 06:33:56.510207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
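The key_long value above comes from the python heredoc traced in nvmf/common.sh. A standalone sketch of that computation, assuming (as the traced prefix/key/digest variables suggest) that the literal key string is suffixed with its little-endian CRC-32 and base64-encoded; it reproduces the NVMeTLSkey-1:02:... string written to /tmp/tmp.ocnidqRYCY:

# interchange format: <prefix>:<digest hex>:<base64(key || crc32_le(key))>:
python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(2, base64.b64encode(key + crc).decode()))
EOF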
00:23:36.389 [2024-11-20 06:33:56.510213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.389 [2024-11-20 06:33:56.510218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.389 [2024-11-20 06:33:56.510222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.389 [2024-11-20 06:33:56.510699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ocnidqRYCY 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ocnidqRYCY 00:23:36.960 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.221 [2024-11-20 06:33:57.377427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.221 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.482 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.482 [2024-11-20 06:33:57.738307] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.482 [2024-11-20 06:33:57.738506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.743 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.743 malloc0 00:23:37.743 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.003 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ocnidqRYCY 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ocnidqRYCY 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2855399 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2855399 /var/tmp/bdevperf.sock 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2855399 ']' 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.264 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:38.265 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.265 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:38.265 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.265 [2024-11-20 06:33:58.535853] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
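For reference, the target-side setup_nvmf_tgt sequence traced above (target/tls.sh@50 through @59), condensed to the RPCs it issues; every argument is taken from the trace. The -k flag on the listener is what requests the secure (TLS) channel on the TCP transport:

# target-side TLS setup: transport, subsystem, TLS listener, backing bdev,
# namespace, key registration, and host authorization with that PSK
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0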
00:23:38.265 [2024-11-20 06:33:58.535905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855399 ] 00:23:38.525 [2024-11-20 06:33:58.619160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.525 [2024-11-20 06:33:58.648101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.525 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:38.525 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:38.525 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:23:38.785 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.056 [2024-11-20 06:33:59.062330] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.056 TLSTESTn1 00:23:39.056 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.056 Running I/O for 10 seconds... 00:23:41.386 6165.00 IOPS, 24.08 MiB/s [2024-11-20T05:34:02.604Z] 5403.00 IOPS, 21.11 MiB/s [2024-11-20T05:34:03.545Z] 5416.00 IOPS, 21.16 MiB/s [2024-11-20T05:34:04.485Z] 5555.25 IOPS, 21.70 MiB/s [2024-11-20T05:34:05.429Z] 5626.20 IOPS, 21.98 MiB/s [2024-11-20T05:34:06.372Z] 5480.50 IOPS, 21.41 MiB/s [2024-11-20T05:34:07.314Z] 5453.57 IOPS, 21.30 MiB/s [2024-11-20T05:34:08.698Z] 5498.12 IOPS, 21.48 MiB/s [2024-11-20T05:34:09.640Z] 5571.33 IOPS, 21.76 MiB/s [2024-11-20T05:34:09.640Z] 5503.10 IOPS, 21.50 MiB/s 00:23:49.361 Latency(us) 00:23:49.361 [2024-11-20T05:34:09.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.361 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.361 Verification LBA range: start 0x0 length 0x2000 00:23:49.361 TLSTESTn1 : 10.02 5505.77 21.51 0.00 0.00 23214.08 4833.28 34952.53 00:23:49.361 [2024-11-20T05:34:09.640Z] =================================================================================================================== 00:23:49.361 [2024-11-20T05:34:09.640Z] Total : 5505.77 21.51 0.00 0.00 23214.08 4833.28 34952.53 00:23:49.361 { 00:23:49.361 "results": [ 00:23:49.361 { 00:23:49.361 "job": "TLSTESTn1", 00:23:49.361 "core_mask": "0x4", 00:23:49.361 "workload": "verify", 00:23:49.361 "status": "finished", 00:23:49.361 "verify_range": { 00:23:49.361 "start": 0, 00:23:49.361 "length": 8192 00:23:49.361 }, 00:23:49.361 "queue_depth": 128, 00:23:49.361 "io_size": 4096, 00:23:49.361 "runtime": 10.018212, 00:23:49.361 "iops": 5505.772886419253, 00:23:49.361 "mibps": 21.506925337575208, 00:23:49.361 "io_failed": 0, 00:23:49.361 "io_timeout": 0, 00:23:49.361 "avg_latency_us": 23214.08433179835, 00:23:49.361 "min_latency_us": 4833.28, 00:23:49.361 "max_latency_us": 34952.53333333333 00:23:49.361 } 00:23:49.361 ], 00:23:49.361 "core_count": 1 
00:23:49.361 } 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2855399 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2855399 ']' 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2855399 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2855399 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2855399' 00:23:49.361 killing process with pid 2855399 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2855399 00:23:49.361 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.361 00:23:49.361 Latency(us) 00:23:49.361 [2024-11-20T05:34:09.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.361 [2024-11-20T05:34:09.640Z] =================================================================================================================== 00:23:49.361 [2024-11-20T05:34:09.640Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2855399 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ocnidqRYCY 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ocnidqRYCY 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ocnidqRYCY 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ocnidqRYCY 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:49.361 06:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ocnidqRYCY 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2857459 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2857459 /var/tmp/bdevperf.sock 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2857459 ']' 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.361 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.361 [2024-11-20 06:34:09.555954] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
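This run follows a chmod 0666 of the key file (traced above) and is expected to fail: the keyring refuses key files that are group- or world-accessible, as the 0100666 error below shows. The check being exercised, in isolation:

# expected to fail while the key file is 0666; tls.sh restores 0600 later
chmod 0666 /tmp/tmp.ocnidqRYCY
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY \
    || echo "rejected: invalid permissions 0100666"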
00:23:49.361 [2024-11-20 06:34:09.556011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857459 ] 00:23:49.622 [2024-11-20 06:34:09.638457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.622 [2024-11-20 06:34:09.666922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.192 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.192 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:50.192 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:23:50.452 [2024-11-20 06:34:10.494181] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ocnidqRYCY': 0100666 00:23:50.452 [2024-11-20 06:34:10.494207] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:50.452 request: 00:23:50.452 { 00:23:50.452 "name": "key0", 00:23:50.452 "path": "/tmp/tmp.ocnidqRYCY", 00:23:50.452 "method": "keyring_file_add_key", 00:23:50.452 "req_id": 1 00:23:50.452 } 00:23:50.452 Got JSON-RPC error response 00:23:50.452 response: 00:23:50.452 { 00:23:50.452 "code": -1, 00:23:50.452 "message": "Operation not permitted" 00:23:50.452 } 00:23:50.452 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.452 [2024-11-20 06:34:10.674709] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.452 [2024-11-20 06:34:10.674735] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:50.452 request: 00:23:50.452 { 00:23:50.452 "name": "TLSTEST", 00:23:50.452 "trtype": "tcp", 00:23:50.452 "traddr": "10.0.0.2", 00:23:50.452 "adrfam": "ipv4", 00:23:50.452 "trsvcid": "4420", 00:23:50.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.452 "prchk_reftag": false, 00:23:50.452 "prchk_guard": false, 00:23:50.452 "hdgst": false, 00:23:50.452 "ddgst": false, 00:23:50.452 "psk": "key0", 00:23:50.452 "allow_unrecognized_csi": false, 00:23:50.453 "method": "bdev_nvme_attach_controller", 00:23:50.453 "req_id": 1 00:23:50.453 } 00:23:50.453 Got JSON-RPC error response 00:23:50.453 response: 00:23:50.453 { 00:23:50.453 "code": -126, 00:23:50.453 "message": "Required key not available" 00:23:50.453 } 00:23:50.453 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2857459 00:23:50.453 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2857459 ']' 00:23:50.453 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2857459 00:23:50.453 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:50.453 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:50.453 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2857459 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2857459' 00:23:50.713 killing process with pid 2857459 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2857459 00:23:50.713 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.713 00:23:50.713 Latency(us) 00:23:50.713 [2024-11-20T05:34:10.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.713 [2024-11-20T05:34:10.992Z] =================================================================================================================== 00:23:50.713 [2024-11-20T05:34:10.992Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2857459 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2855027 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2855027 ']' 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2855027 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2855027 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2855027' 00:23:50.713 killing process with pid 2855027 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2855027 00:23:50.713 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2855027 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2857763 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2857763 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2857763 ']' 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:50.975 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.975 [2024-11-20 06:34:11.109953] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:23:50.975 [2024-11-20 06:34:11.110005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.975 [2024-11-20 06:34:11.202731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.975 [2024-11-20 06:34:11.231726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.975 [2024-11-20 06:34:11.231761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.975 [2024-11-20 06:34:11.231766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.975 [2024-11-20 06:34:11.231771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.975 [2024-11-20 06:34:11.231776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
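This target restart sets up the NOT setup_nvmf_tgt case: with the key file still 0666, key registration is refused, so the later host-authorization step names a key that was never added and fails with "Internal error" (-32603). The two-step failure chain, condensed from the trace below:

# both calls fail while /tmp/tmp.ocnidqRYCY is 0666
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY     # refused: file is 0100666
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0               # fails: "Key 'key0' does not exist"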
00:23:50.975 [2024-11-20 06:34:11.232243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ocnidqRYCY 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ocnidqRYCY 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ocnidqRYCY 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ocnidqRYCY 00:23:51.916 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:51.916 [2024-11-20 06:34:12.096658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.916 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:52.176 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.176 [2024-11-20 06:34:12.445509] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.176 [2024-11-20 06:34:12.445696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.436 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.436 malloc0 00:23:52.436 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:52.696 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:23:52.696 [2024-11-20 
06:34:12.972489] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ocnidqRYCY': 0100666 00:23:52.696 [2024-11-20 06:34:12.972509] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:52.956 request: 00:23:52.956 { 00:23:52.957 "name": "key0", 00:23:52.957 "path": "/tmp/tmp.ocnidqRYCY", 00:23:52.957 "method": "keyring_file_add_key", 00:23:52.957 "req_id": 1 00:23:52.957 } 00:23:52.957 Got JSON-RPC error response 00:23:52.957 response: 00:23:52.957 { 00:23:52.957 "code": -1, 00:23:52.957 "message": "Operation not permitted" 00:23:52.957 } 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.957 [2024-11-20 06:34:13.152950] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:52.957 [2024-11-20 06:34:13.152978] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:52.957 request: 00:23:52.957 { 00:23:52.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.957 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.957 "psk": "key0", 00:23:52.957 "method": "nvmf_subsystem_add_host", 00:23:52.957 "req_id": 1 00:23:52.957 } 00:23:52.957 Got JSON-RPC error response 00:23:52.957 response: 00:23:52.957 { 00:23:52.957 "code": -32603, 00:23:52.957 "message": "Internal error" 00:23:52.957 } 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2857763 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2857763 ']' 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2857763 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:52.957 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2857763 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2857763' 00:23:53.218 killing process with pid 2857763 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2857763 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2857763 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ocnidqRYCY 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:53.218 06:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2858381 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2858381 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2858381 ']' 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:53.218 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.218 [2024-11-20 06:34:13.423057] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:23:53.218 [2024-11-20 06:34:13.423111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.478 [2024-11-20 06:34:13.513228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.478 [2024-11-20 06:34:13.541713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.478 [2024-11-20 06:34:13.541748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.478 [2024-11-20 06:34:13.541753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.478 [2024-11-20 06:34:13.541759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.478 [2024-11-20 06:34:13.541763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
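With the key file back at 0600, this final restart lets setup_nvmf_tgt complete, bdevperf attaches TLSTESTn1 successfully, and the script then snapshots the live target configuration (the JSON dump that follows). The capture step, as traced at target/tls.sh@198:

# save the running target config; tls.sh reuses it for the next subtest
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
tgtconf=$($rpc save_config)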
00:23:53.478 [2024-11-20 06:34:13.542216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ocnidqRYCY 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ocnidqRYCY 00:23:54.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.308 [2024-11-20 06:34:14.422393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.308 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.569 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.569 [2024-11-20 06:34:14.775261] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.569 [2024-11-20 06:34:14.775442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.569 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.828 malloc0 00:23:54.828 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.088 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:23:55.088 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.347 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.347 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2858818 00:23:55.347 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.347 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2858818 /var/tmp/bdevperf.sock 00:23:55.347 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2858818 ']' 00:23:55.347 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.347 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:55.348 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.348 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:55.348 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.348 [2024-11-20 06:34:15.579291] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:23:55.348 [2024-11-20 06:34:15.579343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858818 ] 00:23:55.608 [2024-11-20 06:34:15.639848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.608 [2024-11-20 06:34:15.668569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.608 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.608 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:55.608 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:23:55.869 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.869 [2024-11-20 06:34:16.086864] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.130 TLSTESTn1 00:23:56.130 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:56.392 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:56.392 "subsystems": [ 00:23:56.392 { 00:23:56.392 "subsystem": "keyring", 00:23:56.392 "config": [ 00:23:56.392 { 00:23:56.392 "method": "keyring_file_add_key", 00:23:56.392 "params": { 00:23:56.392 "name": "key0", 00:23:56.392 "path": "/tmp/tmp.ocnidqRYCY" 00:23:56.392 } 00:23:56.392 } 00:23:56.392 ] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "iobuf", 00:23:56.392 "config": [ 00:23:56.392 { 00:23:56.392 "method": "iobuf_set_options", 00:23:56.392 "params": { 00:23:56.392 "small_pool_count": 8192, 00:23:56.392 "large_pool_count": 1024, 00:23:56.392 "small_bufsize": 8192, 00:23:56.392 "large_bufsize": 135168, 00:23:56.392 "enable_numa": false 00:23:56.392 } 00:23:56.392 } 00:23:56.392 ] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "sock", 00:23:56.392 "config": [ 00:23:56.392 { 00:23:56.392 "method": "sock_set_default_impl", 00:23:56.392 "params": { 00:23:56.392 "impl_name": "posix" 
00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "sock_impl_set_options", 00:23:56.392 "params": { 00:23:56.392 "impl_name": "ssl", 00:23:56.392 "recv_buf_size": 4096, 00:23:56.392 "send_buf_size": 4096, 00:23:56.392 "enable_recv_pipe": true, 00:23:56.392 "enable_quickack": false, 00:23:56.392 "enable_placement_id": 0, 00:23:56.392 "enable_zerocopy_send_server": true, 00:23:56.392 "enable_zerocopy_send_client": false, 00:23:56.392 "zerocopy_threshold": 0, 00:23:56.392 "tls_version": 0, 00:23:56.392 "enable_ktls": false 00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "sock_impl_set_options", 00:23:56.392 "params": { 00:23:56.392 "impl_name": "posix", 00:23:56.392 "recv_buf_size": 2097152, 00:23:56.392 "send_buf_size": 2097152, 00:23:56.392 "enable_recv_pipe": true, 00:23:56.392 "enable_quickack": false, 00:23:56.392 "enable_placement_id": 0, 00:23:56.392 "enable_zerocopy_send_server": true, 00:23:56.392 "enable_zerocopy_send_client": false, 00:23:56.392 "zerocopy_threshold": 0, 00:23:56.392 "tls_version": 0, 00:23:56.392 "enable_ktls": false 00:23:56.392 } 00:23:56.392 } 00:23:56.392 ] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "vmd", 00:23:56.392 "config": [] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "accel", 00:23:56.392 "config": [ 00:23:56.392 { 00:23:56.392 "method": "accel_set_options", 00:23:56.392 "params": { 00:23:56.392 "small_cache_size": 128, 00:23:56.392 "large_cache_size": 16, 00:23:56.392 "task_count": 2048, 00:23:56.392 "sequence_count": 2048, 00:23:56.392 "buf_count": 2048 00:23:56.392 } 00:23:56.392 } 00:23:56.392 ] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "bdev", 00:23:56.392 "config": [ 00:23:56.392 { 00:23:56.392 "method": "bdev_set_options", 00:23:56.392 "params": { 00:23:56.392 "bdev_io_pool_size": 65535, 00:23:56.392 "bdev_io_cache_size": 256, 00:23:56.392 "bdev_auto_examine": true, 00:23:56.392 "iobuf_small_cache_size": 128, 00:23:56.392 "iobuf_large_cache_size": 16 00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "bdev_raid_set_options", 00:23:56.392 "params": { 00:23:56.392 "process_window_size_kb": 1024, 00:23:56.392 "process_max_bandwidth_mb_sec": 0 00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "bdev_iscsi_set_options", 00:23:56.392 "params": { 00:23:56.392 "timeout_sec": 30 00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "bdev_nvme_set_options", 00:23:56.392 "params": { 00:23:56.392 "action_on_timeout": "none", 00:23:56.392 "timeout_us": 0, 00:23:56.392 "timeout_admin_us": 0, 00:23:56.392 "keep_alive_timeout_ms": 10000, 00:23:56.392 "arbitration_burst": 0, 00:23:56.392 "low_priority_weight": 0, 00:23:56.392 "medium_priority_weight": 0, 00:23:56.392 "high_priority_weight": 0, 00:23:56.392 "nvme_adminq_poll_period_us": 10000, 00:23:56.392 "nvme_ioq_poll_period_us": 0, 00:23:56.392 "io_queue_requests": 0, 00:23:56.392 "delay_cmd_submit": true, 00:23:56.392 "transport_retry_count": 4, 00:23:56.392 "bdev_retry_count": 3, 00:23:56.392 "transport_ack_timeout": 0, 00:23:56.392 "ctrlr_loss_timeout_sec": 0, 00:23:56.392 "reconnect_delay_sec": 0, 00:23:56.392 "fast_io_fail_timeout_sec": 0, 00:23:56.392 "disable_auto_failback": false, 00:23:56.392 "generate_uuids": false, 00:23:56.392 "transport_tos": 0, 00:23:56.392 "nvme_error_stat": false, 00:23:56.392 "rdma_srq_size": 0, 00:23:56.392 "io_path_stat": false, 00:23:56.392 "allow_accel_sequence": false, 00:23:56.392 "rdma_max_cq_size": 0, 00:23:56.392 
"rdma_cm_event_timeout_ms": 0, 00:23:56.392 "dhchap_digests": [ 00:23:56.392 "sha256", 00:23:56.392 "sha384", 00:23:56.392 "sha512" 00:23:56.392 ], 00:23:56.392 "dhchap_dhgroups": [ 00:23:56.392 "null", 00:23:56.392 "ffdhe2048", 00:23:56.392 "ffdhe3072", 00:23:56.392 "ffdhe4096", 00:23:56.392 "ffdhe6144", 00:23:56.392 "ffdhe8192" 00:23:56.392 ] 00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "bdev_nvme_set_hotplug", 00:23:56.392 "params": { 00:23:56.392 "period_us": 100000, 00:23:56.392 "enable": false 00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "bdev_malloc_create", 00:23:56.392 "params": { 00:23:56.392 "name": "malloc0", 00:23:56.392 "num_blocks": 8192, 00:23:56.392 "block_size": 4096, 00:23:56.392 "physical_block_size": 4096, 00:23:56.392 "uuid": "e000db2f-b657-4b54-85c9-f5d484b98dfe", 00:23:56.392 "optimal_io_boundary": 0, 00:23:56.392 "md_size": 0, 00:23:56.392 "dif_type": 0, 00:23:56.392 "dif_is_head_of_md": false, 00:23:56.392 "dif_pi_format": 0 00:23:56.392 } 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "method": "bdev_wait_for_examine" 00:23:56.392 } 00:23:56.392 ] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "nbd", 00:23:56.392 "config": [] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "scheduler", 00:23:56.392 "config": [ 00:23:56.392 { 00:23:56.392 "method": "framework_set_scheduler", 00:23:56.392 "params": { 00:23:56.392 "name": "static" 00:23:56.392 } 00:23:56.392 } 00:23:56.392 ] 00:23:56.392 }, 00:23:56.392 { 00:23:56.392 "subsystem": "nvmf", 00:23:56.392 "config": [ 00:23:56.392 { 00:23:56.392 "method": "nvmf_set_config", 00:23:56.392 "params": { 00:23:56.393 "discovery_filter": "match_any", 00:23:56.393 "admin_cmd_passthru": { 00:23:56.393 "identify_ctrlr": false 00:23:56.393 }, 00:23:56.393 "dhchap_digests": [ 00:23:56.393 "sha256", 00:23:56.393 "sha384", 00:23:56.393 "sha512" 00:23:56.393 ], 00:23:56.393 "dhchap_dhgroups": [ 00:23:56.393 "null", 00:23:56.393 "ffdhe2048", 00:23:56.393 "ffdhe3072", 00:23:56.393 "ffdhe4096", 00:23:56.393 "ffdhe6144", 00:23:56.393 "ffdhe8192" 00:23:56.393 ] 00:23:56.393 } 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "method": "nvmf_set_max_subsystems", 00:23:56.393 "params": { 00:23:56.393 "max_subsystems": 1024 00:23:56.393 } 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "method": "nvmf_set_crdt", 00:23:56.393 "params": { 00:23:56.393 "crdt1": 0, 00:23:56.393 "crdt2": 0, 00:23:56.393 "crdt3": 0 00:23:56.393 } 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "method": "nvmf_create_transport", 00:23:56.393 "params": { 00:23:56.393 "trtype": "TCP", 00:23:56.393 "max_queue_depth": 128, 00:23:56.393 "max_io_qpairs_per_ctrlr": 127, 00:23:56.393 "in_capsule_data_size": 4096, 00:23:56.393 "max_io_size": 131072, 00:23:56.393 "io_unit_size": 131072, 00:23:56.393 "max_aq_depth": 128, 00:23:56.393 "num_shared_buffers": 511, 00:23:56.393 "buf_cache_size": 4294967295, 00:23:56.393 "dif_insert_or_strip": false, 00:23:56.393 "zcopy": false, 00:23:56.393 "c2h_success": false, 00:23:56.393 "sock_priority": 0, 00:23:56.393 "abort_timeout_sec": 1, 00:23:56.393 "ack_timeout": 0, 00:23:56.393 "data_wr_pool_size": 0 00:23:56.393 } 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "method": "nvmf_create_subsystem", 00:23:56.393 "params": { 00:23:56.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.393 "allow_any_host": false, 00:23:56.393 "serial_number": "SPDK00000000000001", 00:23:56.393 "model_number": "SPDK bdev Controller", 00:23:56.393 "max_namespaces": 10, 00:23:56.393 "min_cntlid": 1, 00:23:56.393 
"max_cntlid": 65519, 00:23:56.393 "ana_reporting": false 00:23:56.393 } 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "method": "nvmf_subsystem_add_host", 00:23:56.393 "params": { 00:23:56.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.393 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.393 "psk": "key0" 00:23:56.393 } 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "method": "nvmf_subsystem_add_ns", 00:23:56.393 "params": { 00:23:56.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.393 "namespace": { 00:23:56.393 "nsid": 1, 00:23:56.393 "bdev_name": "malloc0", 00:23:56.393 "nguid": "E000DB2FB6574B5485C9F5D484B98DFE", 00:23:56.393 "uuid": "e000db2f-b657-4b54-85c9-f5d484b98dfe", 00:23:56.393 "no_auto_visible": false 00:23:56.393 } 00:23:56.393 } 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "method": "nvmf_subsystem_add_listener", 00:23:56.393 "params": { 00:23:56.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.393 "listen_address": { 00:23:56.393 "trtype": "TCP", 00:23:56.393 "adrfam": "IPv4", 00:23:56.393 "traddr": "10.0.0.2", 00:23:56.393 "trsvcid": "4420" 00:23:56.393 }, 00:23:56.393 "secure_channel": true 00:23:56.393 } 00:23:56.393 } 00:23:56.393 ] 00:23:56.393 } 00:23:56.393 ] 00:23:56.393 }' 00:23:56.393 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:56.654 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:56.654 "subsystems": [ 00:23:56.654 { 00:23:56.654 "subsystem": "keyring", 00:23:56.654 "config": [ 00:23:56.654 { 00:23:56.654 "method": "keyring_file_add_key", 00:23:56.654 "params": { 00:23:56.654 "name": "key0", 00:23:56.654 "path": "/tmp/tmp.ocnidqRYCY" 00:23:56.654 } 00:23:56.654 } 00:23:56.654 ] 00:23:56.654 }, 00:23:56.654 { 00:23:56.654 "subsystem": "iobuf", 00:23:56.654 "config": [ 00:23:56.654 { 00:23:56.654 "method": "iobuf_set_options", 00:23:56.654 "params": { 00:23:56.654 "small_pool_count": 8192, 00:23:56.654 "large_pool_count": 1024, 00:23:56.654 "small_bufsize": 8192, 00:23:56.654 "large_bufsize": 135168, 00:23:56.654 "enable_numa": false 00:23:56.654 } 00:23:56.654 } 00:23:56.654 ] 00:23:56.654 }, 00:23:56.654 { 00:23:56.654 "subsystem": "sock", 00:23:56.654 "config": [ 00:23:56.654 { 00:23:56.654 "method": "sock_set_default_impl", 00:23:56.654 "params": { 00:23:56.654 "impl_name": "posix" 00:23:56.654 } 00:23:56.654 }, 00:23:56.654 { 00:23:56.654 "method": "sock_impl_set_options", 00:23:56.654 "params": { 00:23:56.654 "impl_name": "ssl", 00:23:56.654 "recv_buf_size": 4096, 00:23:56.654 "send_buf_size": 4096, 00:23:56.654 "enable_recv_pipe": true, 00:23:56.654 "enable_quickack": false, 00:23:56.654 "enable_placement_id": 0, 00:23:56.654 "enable_zerocopy_send_server": true, 00:23:56.654 "enable_zerocopy_send_client": false, 00:23:56.654 "zerocopy_threshold": 0, 00:23:56.654 "tls_version": 0, 00:23:56.654 "enable_ktls": false 00:23:56.654 } 00:23:56.654 }, 00:23:56.654 { 00:23:56.654 "method": "sock_impl_set_options", 00:23:56.654 "params": { 00:23:56.655 "impl_name": "posix", 00:23:56.655 "recv_buf_size": 2097152, 00:23:56.655 "send_buf_size": 2097152, 00:23:56.655 "enable_recv_pipe": true, 00:23:56.655 "enable_quickack": false, 00:23:56.655 "enable_placement_id": 0, 00:23:56.655 "enable_zerocopy_send_server": true, 00:23:56.655 "enable_zerocopy_send_client": false, 00:23:56.655 "zerocopy_threshold": 0, 00:23:56.655 "tls_version": 0, 00:23:56.655 "enable_ktls": false 00:23:56.655 } 00:23:56.655 
} 00:23:56.655 ] 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "subsystem": "vmd", 00:23:56.655 "config": [] 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "subsystem": "accel", 00:23:56.655 "config": [ 00:23:56.655 { 00:23:56.655 "method": "accel_set_options", 00:23:56.655 "params": { 00:23:56.655 "small_cache_size": 128, 00:23:56.655 "large_cache_size": 16, 00:23:56.655 "task_count": 2048, 00:23:56.655 "sequence_count": 2048, 00:23:56.655 "buf_count": 2048 00:23:56.655 } 00:23:56.655 } 00:23:56.655 ] 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "subsystem": "bdev", 00:23:56.655 "config": [ 00:23:56.655 { 00:23:56.655 "method": "bdev_set_options", 00:23:56.655 "params": { 00:23:56.655 "bdev_io_pool_size": 65535, 00:23:56.655 "bdev_io_cache_size": 256, 00:23:56.655 "bdev_auto_examine": true, 00:23:56.655 "iobuf_small_cache_size": 128, 00:23:56.655 "iobuf_large_cache_size": 16 00:23:56.655 } 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "method": "bdev_raid_set_options", 00:23:56.655 "params": { 00:23:56.655 "process_window_size_kb": 1024, 00:23:56.655 "process_max_bandwidth_mb_sec": 0 00:23:56.655 } 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "method": "bdev_iscsi_set_options", 00:23:56.655 "params": { 00:23:56.655 "timeout_sec": 30 00:23:56.655 } 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "method": "bdev_nvme_set_options", 00:23:56.655 "params": { 00:23:56.655 "action_on_timeout": "none", 00:23:56.655 "timeout_us": 0, 00:23:56.655 "timeout_admin_us": 0, 00:23:56.655 "keep_alive_timeout_ms": 10000, 00:23:56.655 "arbitration_burst": 0, 00:23:56.655 "low_priority_weight": 0, 00:23:56.655 "medium_priority_weight": 0, 00:23:56.655 "high_priority_weight": 0, 00:23:56.655 "nvme_adminq_poll_period_us": 10000, 00:23:56.655 "nvme_ioq_poll_period_us": 0, 00:23:56.655 "io_queue_requests": 512, 00:23:56.655 "delay_cmd_submit": true, 00:23:56.655 "transport_retry_count": 4, 00:23:56.655 "bdev_retry_count": 3, 00:23:56.655 "transport_ack_timeout": 0, 00:23:56.655 "ctrlr_loss_timeout_sec": 0, 00:23:56.655 "reconnect_delay_sec": 0, 00:23:56.655 "fast_io_fail_timeout_sec": 0, 00:23:56.655 "disable_auto_failback": false, 00:23:56.655 "generate_uuids": false, 00:23:56.655 "transport_tos": 0, 00:23:56.655 "nvme_error_stat": false, 00:23:56.655 "rdma_srq_size": 0, 00:23:56.655 "io_path_stat": false, 00:23:56.655 "allow_accel_sequence": false, 00:23:56.655 "rdma_max_cq_size": 0, 00:23:56.655 "rdma_cm_event_timeout_ms": 0, 00:23:56.655 "dhchap_digests": [ 00:23:56.655 "sha256", 00:23:56.655 "sha384", 00:23:56.655 "sha512" 00:23:56.655 ], 00:23:56.655 "dhchap_dhgroups": [ 00:23:56.655 "null", 00:23:56.655 "ffdhe2048", 00:23:56.655 "ffdhe3072", 00:23:56.655 "ffdhe4096", 00:23:56.655 "ffdhe6144", 00:23:56.655 "ffdhe8192" 00:23:56.655 ] 00:23:56.655 } 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "method": "bdev_nvme_attach_controller", 00:23:56.655 "params": { 00:23:56.655 "name": "TLSTEST", 00:23:56.655 "trtype": "TCP", 00:23:56.655 "adrfam": "IPv4", 00:23:56.655 "traddr": "10.0.0.2", 00:23:56.655 "trsvcid": "4420", 00:23:56.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.655 "prchk_reftag": false, 00:23:56.655 "prchk_guard": false, 00:23:56.655 "ctrlr_loss_timeout_sec": 0, 00:23:56.655 "reconnect_delay_sec": 0, 00:23:56.655 "fast_io_fail_timeout_sec": 0, 00:23:56.655 "psk": "key0", 00:23:56.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.655 "hdgst": false, 00:23:56.655 "ddgst": false, 00:23:56.655 "multipath": "multipath" 00:23:56.655 } 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "method": 
"bdev_nvme_set_hotplug", 00:23:56.655 "params": { 00:23:56.655 "period_us": 100000, 00:23:56.655 "enable": false 00:23:56.655 } 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "method": "bdev_wait_for_examine" 00:23:56.655 } 00:23:56.655 ] 00:23:56.655 }, 00:23:56.655 { 00:23:56.655 "subsystem": "nbd", 00:23:56.655 "config": [] 00:23:56.655 } 00:23:56.655 ] 00:23:56.655 }' 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2858818 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2858818 ']' 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2858818 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2858818 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2858818' 00:23:56.655 killing process with pid 2858818 00:23:56.655 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2858818 00:23:56.655 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.655 00:23:56.656 Latency(us) 00:23:56.656 [2024-11-20T05:34:16.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.656 [2024-11-20T05:34:16.935Z] =================================================================================================================== 00:23:56.656 [2024-11-20T05:34:16.935Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2858818 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2858381 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2858381 ']' 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2858381 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2858381 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2858381' 00:23:56.656 killing process with pid 2858381 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2858381 00:23:56.656 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2858381 00:23:56.918 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:56.918 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.918 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.918 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.918 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:56.918 "subsystems": [ 00:23:56.918 { 00:23:56.918 "subsystem": "keyring", 00:23:56.918 "config": [ 00:23:56.918 { 00:23:56.918 "method": "keyring_file_add_key", 00:23:56.918 "params": { 00:23:56.918 "name": "key0", 00:23:56.918 "path": "/tmp/tmp.ocnidqRYCY" 00:23:56.918 } 00:23:56.918 } 00:23:56.918 ] 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "subsystem": "iobuf", 00:23:56.918 "config": [ 00:23:56.918 { 00:23:56.918 "method": "iobuf_set_options", 00:23:56.918 "params": { 00:23:56.918 "small_pool_count": 8192, 00:23:56.918 "large_pool_count": 1024, 00:23:56.918 "small_bufsize": 8192, 00:23:56.918 "large_bufsize": 135168, 00:23:56.918 "enable_numa": false 00:23:56.918 } 00:23:56.918 } 00:23:56.918 ] 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "subsystem": "sock", 00:23:56.918 "config": [ 00:23:56.918 { 00:23:56.918 "method": "sock_set_default_impl", 00:23:56.918 "params": { 00:23:56.918 "impl_name": "posix" 00:23:56.918 } 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "method": "sock_impl_set_options", 00:23:56.918 "params": { 00:23:56.918 "impl_name": "ssl", 00:23:56.918 "recv_buf_size": 4096, 00:23:56.918 "send_buf_size": 4096, 00:23:56.918 "enable_recv_pipe": true, 00:23:56.918 "enable_quickack": false, 00:23:56.918 "enable_placement_id": 0, 00:23:56.918 "enable_zerocopy_send_server": true, 00:23:56.918 "enable_zerocopy_send_client": false, 00:23:56.918 "zerocopy_threshold": 0, 00:23:56.918 "tls_version": 0, 00:23:56.918 "enable_ktls": false 00:23:56.918 } 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "method": "sock_impl_set_options", 00:23:56.918 "params": { 00:23:56.918 "impl_name": "posix", 00:23:56.918 "recv_buf_size": 2097152, 00:23:56.918 "send_buf_size": 2097152, 00:23:56.918 "enable_recv_pipe": true, 00:23:56.918 "enable_quickack": false, 00:23:56.918 "enable_placement_id": 0, 00:23:56.918 "enable_zerocopy_send_server": true, 00:23:56.918 "enable_zerocopy_send_client": false, 00:23:56.918 "zerocopy_threshold": 0, 00:23:56.918 "tls_version": 0, 00:23:56.918 "enable_ktls": false 00:23:56.918 } 00:23:56.918 } 00:23:56.918 ] 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "subsystem": "vmd", 00:23:56.918 "config": [] 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "subsystem": "accel", 00:23:56.918 "config": [ 00:23:56.918 { 00:23:56.918 "method": "accel_set_options", 00:23:56.918 "params": { 00:23:56.918 "small_cache_size": 128, 00:23:56.918 "large_cache_size": 16, 00:23:56.918 "task_count": 2048, 00:23:56.918 "sequence_count": 2048, 00:23:56.918 "buf_count": 2048 00:23:56.918 } 00:23:56.918 } 00:23:56.918 ] 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "subsystem": "bdev", 00:23:56.918 "config": [ 00:23:56.918 { 00:23:56.918 "method": "bdev_set_options", 00:23:56.918 "params": { 00:23:56.918 "bdev_io_pool_size": 65535, 00:23:56.918 "bdev_io_cache_size": 256, 00:23:56.918 "bdev_auto_examine": true, 00:23:56.918 "iobuf_small_cache_size": 128, 00:23:56.918 "iobuf_large_cache_size": 16 00:23:56.918 } 00:23:56.918 }, 00:23:56.918 { 00:23:56.918 "method": "bdev_raid_set_options", 00:23:56.918 "params": { 00:23:56.918 
"process_window_size_kb": 1024, 00:23:56.918 "process_max_bandwidth_mb_sec": 0 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "bdev_iscsi_set_options", 00:23:56.919 "params": { 00:23:56.919 "timeout_sec": 30 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "bdev_nvme_set_options", 00:23:56.919 "params": { 00:23:56.919 "action_on_timeout": "none", 00:23:56.919 "timeout_us": 0, 00:23:56.919 "timeout_admin_us": 0, 00:23:56.919 "keep_alive_timeout_ms": 10000, 00:23:56.919 "arbitration_burst": 0, 00:23:56.919 "low_priority_weight": 0, 00:23:56.919 "medium_priority_weight": 0, 00:23:56.919 "high_priority_weight": 0, 00:23:56.919 "nvme_adminq_poll_period_us": 10000, 00:23:56.919 "nvme_ioq_poll_period_us": 0, 00:23:56.919 "io_queue_requests": 0, 00:23:56.919 "delay_cmd_submit": true, 00:23:56.919 "transport_retry_count": 4, 00:23:56.919 "bdev_retry_count": 3, 00:23:56.919 "transport_ack_timeout": 0, 00:23:56.919 "ctrlr_loss_timeout_sec": 0, 00:23:56.919 "reconnect_delay_sec": 0, 00:23:56.919 "fast_io_fail_timeout_sec": 0, 00:23:56.919 "disable_auto_failback": false, 00:23:56.919 "generate_uuids": false, 00:23:56.919 "transport_tos": 0, 00:23:56.919 "nvme_error_stat": false, 00:23:56.919 "rdma_srq_size": 0, 00:23:56.919 "io_path_stat": false, 00:23:56.919 "allow_accel_sequence": false, 00:23:56.919 "rdma_max_cq_size": 0, 00:23:56.919 "rdma_cm_event_timeout_ms": 0, 00:23:56.919 "dhchap_digests": [ 00:23:56.919 "sha256", 00:23:56.919 "sha384", 00:23:56.919 "sha512" 00:23:56.919 ], 00:23:56.919 "dhchap_dhgroups": [ 00:23:56.919 "null", 00:23:56.919 "ffdhe2048", 00:23:56.919 "ffdhe3072", 00:23:56.919 "ffdhe4096", 00:23:56.919 "ffdhe6144", 00:23:56.919 "ffdhe8192" 00:23:56.919 ] 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "bdev_nvme_set_hotplug", 00:23:56.919 "params": { 00:23:56.919 "period_us": 100000, 00:23:56.919 "enable": false 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "bdev_malloc_create", 00:23:56.919 "params": { 00:23:56.919 "name": "malloc0", 00:23:56.919 "num_blocks": 8192, 00:23:56.919 "block_size": 4096, 00:23:56.919 "physical_block_size": 4096, 00:23:56.919 "uuid": "e000db2f-b657-4b54-85c9-f5d484b98dfe", 00:23:56.919 "optimal_io_boundary": 0, 00:23:56.919 "md_size": 0, 00:23:56.919 "dif_type": 0, 00:23:56.919 "dif_is_head_of_md": false, 00:23:56.919 "dif_pi_format": 0 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "bdev_wait_for_examine" 00:23:56.919 } 00:23:56.919 ] 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "subsystem": "nbd", 00:23:56.919 "config": [] 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "subsystem": "scheduler", 00:23:56.919 "config": [ 00:23:56.919 { 00:23:56.919 "method": "framework_set_scheduler", 00:23:56.919 "params": { 00:23:56.919 "name": "static" 00:23:56.919 } 00:23:56.919 } 00:23:56.919 ] 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "subsystem": "nvmf", 00:23:56.919 "config": [ 00:23:56.919 { 00:23:56.919 "method": "nvmf_set_config", 00:23:56.919 "params": { 00:23:56.919 "discovery_filter": "match_any", 00:23:56.919 "admin_cmd_passthru": { 00:23:56.919 "identify_ctrlr": false 00:23:56.919 }, 00:23:56.919 "dhchap_digests": [ 00:23:56.919 "sha256", 00:23:56.919 "sha384", 00:23:56.919 "sha512" 00:23:56.919 ], 00:23:56.919 "dhchap_dhgroups": [ 00:23:56.919 "null", 00:23:56.919 "ffdhe2048", 00:23:56.919 "ffdhe3072", 00:23:56.919 "ffdhe4096", 00:23:56.919 "ffdhe6144", 00:23:56.919 "ffdhe8192" 00:23:56.919 ] 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 
00:23:56.919 "method": "nvmf_set_max_subsystems", 00:23:56.919 "params": { 00:23:56.919 "max_subsystems": 1024 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "nvmf_set_crdt", 00:23:56.919 "params": { 00:23:56.919 "crdt1": 0, 00:23:56.919 "crdt2": 0, 00:23:56.919 "crdt3": 0 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "nvmf_create_transport", 00:23:56.919 "params": { 00:23:56.919 "trtype": "TCP", 00:23:56.919 "max_queue_depth": 128, 00:23:56.919 "max_io_qpairs_per_ctrlr": 127, 00:23:56.919 "in_capsule_data_size": 4096, 00:23:56.919 "max_io_size": 131072, 00:23:56.919 "io_unit_size": 131072, 00:23:56.919 "max_aq_depth": 128, 00:23:56.919 "num_shared_buffers": 511, 00:23:56.919 "buf_cache_size": 4294967295, 00:23:56.919 "dif_insert_or_strip": false, 00:23:56.919 "zcopy": false, 00:23:56.919 "c2h_success": false, 00:23:56.919 "sock_priority": 0, 00:23:56.919 "abort_timeout_sec": 1, 00:23:56.919 "ack_timeout": 0, 00:23:56.919 "data_wr_pool_size": 0 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "nvmf_create_subsystem", 00:23:56.919 "params": { 00:23:56.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.919 "allow_any_host": false, 00:23:56.919 "serial_number": "SPDK00000000000001", 00:23:56.919 "model_number": "SPDK bdev Controller", 00:23:56.919 "max_namespaces": 10, 00:23:56.919 "min_cntlid": 1, 00:23:56.919 "max_cntlid": 65519, 00:23:56.919 "ana_reporting": false 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "nvmf_subsystem_add_host", 00:23:56.919 "params": { 00:23:56.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.919 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.919 "psk": "key0" 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "nvmf_subsystem_add_ns", 00:23:56.919 "params": { 00:23:56.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.919 "namespace": { 00:23:56.919 "nsid": 1, 00:23:56.919 "bdev_name": "malloc0", 00:23:56.919 "nguid": "E000DB2FB6574B5485C9F5D484B98DFE", 00:23:56.919 "uuid": "e000db2f-b657-4b54-85c9-f5d484b98dfe", 00:23:56.919 "no_auto_visible": false 00:23:56.919 } 00:23:56.919 } 00:23:56.919 }, 00:23:56.919 { 00:23:56.919 "method": "nvmf_subsystem_add_listener", 00:23:56.919 "params": { 00:23:56.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.919 "listen_address": { 00:23:56.919 "trtype": "TCP", 00:23:56.919 "adrfam": "IPv4", 00:23:56.919 "traddr": "10.0.0.2", 00:23:56.919 "trsvcid": "4420" 00:23:56.919 }, 00:23:56.919 "secure_channel": true 00:23:56.919 } 00:23:56.919 } 00:23:56.919 ] 00:23:56.919 } 00:23:56.919 ] 00:23:56.919 }' 00:23:56.919 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2859174 00:23:56.919 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2859174 00:23:56.919 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:56.919 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2859174 ']' 00:23:56.919 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.919 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.919 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:23:56.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.920 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.920 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.920 [2024-11-20 06:34:17.093890] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:23:56.920 [2024-11-20 06:34:17.093945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.920 [2024-11-20 06:34:17.184877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.180 [2024-11-20 06:34:17.213823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.180 [2024-11-20 06:34:17.213856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.180 [2024-11-20 06:34:17.213862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.180 [2024-11-20 06:34:17.213866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.180 [2024-11-20 06:34:17.213870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.180 [2024-11-20 06:34:17.214369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.180 [2024-11-20 06:34:17.408039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.180 [2024-11-20 06:34:17.440064] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.180 [2024-11-20 06:34:17.440272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2859198 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2859198 /var/tmp/bdevperf.sock 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2859198 ']' 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
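Unlike the first pass, which configured the target RPC by RPC, this pass replays the JSON captured earlier with save_config: the $tgtconf blob is fed to nvmf_tgt as -c /dev/fd/62 via process substitution, and bdevperf receives $bdevperfconf the same way on /dev/fd/63 below. A sketch of the pattern, using the shell variables captured above:

    # Target comes up pre-configured straight from the saved JSON:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
    # bdevperf takes its half of the config the same way:
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")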
00:23:57.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.751 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:57.751 "subsystems": [ 00:23:57.751 { 00:23:57.751 "subsystem": "keyring", 00:23:57.751 "config": [ 00:23:57.751 { 00:23:57.751 "method": "keyring_file_add_key", 00:23:57.751 "params": { 00:23:57.752 "name": "key0", 00:23:57.752 "path": "/tmp/tmp.ocnidqRYCY" 00:23:57.752 } 00:23:57.752 } 00:23:57.752 ] 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "subsystem": "iobuf", 00:23:57.752 "config": [ 00:23:57.752 { 00:23:57.752 "method": "iobuf_set_options", 00:23:57.752 "params": { 00:23:57.752 "small_pool_count": 8192, 00:23:57.752 "large_pool_count": 1024, 00:23:57.752 "small_bufsize": 8192, 00:23:57.752 "large_bufsize": 135168, 00:23:57.752 "enable_numa": false 00:23:57.752 } 00:23:57.752 } 00:23:57.752 ] 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "subsystem": "sock", 00:23:57.752 "config": [ 00:23:57.752 { 00:23:57.752 "method": "sock_set_default_impl", 00:23:57.752 "params": { 00:23:57.752 "impl_name": "posix" 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "sock_impl_set_options", 00:23:57.752 "params": { 00:23:57.752 "impl_name": "ssl", 00:23:57.752 "recv_buf_size": 4096, 00:23:57.752 "send_buf_size": 4096, 00:23:57.752 "enable_recv_pipe": true, 00:23:57.752 "enable_quickack": false, 00:23:57.752 "enable_placement_id": 0, 00:23:57.752 "enable_zerocopy_send_server": true, 00:23:57.752 "enable_zerocopy_send_client": false, 00:23:57.752 "zerocopy_threshold": 0, 00:23:57.752 "tls_version": 0, 00:23:57.752 "enable_ktls": false 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "sock_impl_set_options", 00:23:57.752 "params": { 00:23:57.752 "impl_name": "posix", 00:23:57.752 "recv_buf_size": 2097152, 00:23:57.752 "send_buf_size": 2097152, 00:23:57.752 "enable_recv_pipe": true, 00:23:57.752 "enable_quickack": false, 00:23:57.752 "enable_placement_id": 0, 00:23:57.752 "enable_zerocopy_send_server": true, 00:23:57.752 "enable_zerocopy_send_client": false, 00:23:57.752 "zerocopy_threshold": 0, 00:23:57.752 "tls_version": 0, 00:23:57.752 "enable_ktls": false 00:23:57.752 } 00:23:57.752 } 00:23:57.752 ] 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "subsystem": "vmd", 00:23:57.752 "config": [] 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "subsystem": "accel", 00:23:57.752 "config": [ 00:23:57.752 { 00:23:57.752 "method": "accel_set_options", 00:23:57.752 "params": { 00:23:57.752 "small_cache_size": 128, 00:23:57.752 "large_cache_size": 16, 00:23:57.752 "task_count": 2048, 00:23:57.752 "sequence_count": 2048, 00:23:57.752 "buf_count": 2048 00:23:57.752 } 00:23:57.752 } 00:23:57.752 ] 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "subsystem": "bdev", 00:23:57.752 "config": [ 00:23:57.752 { 00:23:57.752 "method": "bdev_set_options", 00:23:57.752 "params": { 00:23:57.752 "bdev_io_pool_size": 65535, 00:23:57.752 "bdev_io_cache_size": 256, 00:23:57.752 "bdev_auto_examine": true, 00:23:57.752 "iobuf_small_cache_size": 128, 
00:23:57.752 "iobuf_large_cache_size": 16 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "bdev_raid_set_options", 00:23:57.752 "params": { 00:23:57.752 "process_window_size_kb": 1024, 00:23:57.752 "process_max_bandwidth_mb_sec": 0 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "bdev_iscsi_set_options", 00:23:57.752 "params": { 00:23:57.752 "timeout_sec": 30 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "bdev_nvme_set_options", 00:23:57.752 "params": { 00:23:57.752 "action_on_timeout": "none", 00:23:57.752 "timeout_us": 0, 00:23:57.752 "timeout_admin_us": 0, 00:23:57.752 "keep_alive_timeout_ms": 10000, 00:23:57.752 "arbitration_burst": 0, 00:23:57.752 "low_priority_weight": 0, 00:23:57.752 "medium_priority_weight": 0, 00:23:57.752 "high_priority_weight": 0, 00:23:57.752 "nvme_adminq_poll_period_us": 10000, 00:23:57.752 "nvme_ioq_poll_period_us": 0, 00:23:57.752 "io_queue_requests": 512, 00:23:57.752 "delay_cmd_submit": true, 00:23:57.752 "transport_retry_count": 4, 00:23:57.752 "bdev_retry_count": 3, 00:23:57.752 "transport_ack_timeout": 0, 00:23:57.752 "ctrlr_loss_timeout_sec": 0, 00:23:57.752 "reconnect_delay_sec": 0, 00:23:57.752 "fast_io_fail_timeout_sec": 0, 00:23:57.752 "disable_auto_failback": false, 00:23:57.752 "generate_uuids": false, 00:23:57.752 "transport_tos": 0, 00:23:57.752 "nvme_error_stat": false, 00:23:57.752 "rdma_srq_size": 0, 00:23:57.752 "io_path_stat": false, 00:23:57.752 "allow_accel_sequence": false, 00:23:57.752 "rdma_max_cq_size": 0, 00:23:57.752 "rdma_cm_event_timeout_ms": 0, 00:23:57.752 "dhchap_digests": [ 00:23:57.752 "sha256", 00:23:57.752 "sha384", 00:23:57.752 "sha512" 00:23:57.752 ], 00:23:57.752 "dhchap_dhgroups": [ 00:23:57.752 "null", 00:23:57.752 "ffdhe2048", 00:23:57.752 "ffdhe3072", 00:23:57.752 "ffdhe4096", 00:23:57.752 "ffdhe6144", 00:23:57.752 "ffdhe8192" 00:23:57.752 ] 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "bdev_nvme_attach_controller", 00:23:57.752 "params": { 00:23:57.752 "name": "TLSTEST", 00:23:57.752 "trtype": "TCP", 00:23:57.752 "adrfam": "IPv4", 00:23:57.752 "traddr": "10.0.0.2", 00:23:57.752 "trsvcid": "4420", 00:23:57.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.752 "prchk_reftag": false, 00:23:57.752 "prchk_guard": false, 00:23:57.752 "ctrlr_loss_timeout_sec": 0, 00:23:57.752 "reconnect_delay_sec": 0, 00:23:57.752 "fast_io_fail_timeout_sec": 0, 00:23:57.752 "psk": "key0", 00:23:57.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.752 "hdgst": false, 00:23:57.752 "ddgst": false, 00:23:57.752 "multipath": "multipath" 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "bdev_nvme_set_hotplug", 00:23:57.752 "params": { 00:23:57.752 "period_us": 100000, 00:23:57.752 "enable": false 00:23:57.752 } 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "method": "bdev_wait_for_examine" 00:23:57.752 } 00:23:57.752 ] 00:23:57.752 }, 00:23:57.752 { 00:23:57.752 "subsystem": "nbd", 00:23:57.752 "config": [] 00:23:57.752 } 00:23:57.752 ] 00:23:57.752 }' 00:23:57.752 [2024-11-20 06:34:17.989845] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:23:57.752 [2024-11-20 06:34:17.989896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859198 ] 00:23:58.013 [2024-11-20 06:34:18.073643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.013 [2024-11-20 06:34:18.102437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.013 [2024-11-20 06:34:18.237274] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.583 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:58.583 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:58.583 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:58.844 Running I/O for 10 seconds... 00:24:00.727 4984.00 IOPS, 19.47 MiB/s [2024-11-20T05:34:21.947Z] 5023.00 IOPS, 19.62 MiB/s [2024-11-20T05:34:22.888Z] 4782.33 IOPS, 18.68 MiB/s [2024-11-20T05:34:24.273Z] 5105.00 IOPS, 19.94 MiB/s [2024-11-20T05:34:25.215Z] 5242.60 IOPS, 20.48 MiB/s [2024-11-20T05:34:26.156Z] 5316.67 IOPS, 20.77 MiB/s [2024-11-20T05:34:27.100Z] 5334.43 IOPS, 20.84 MiB/s [2024-11-20T05:34:28.060Z] 5361.12 IOPS, 20.94 MiB/s [2024-11-20T05:34:29.112Z] 5442.44 IOPS, 21.26 MiB/s [2024-11-20T05:34:29.112Z] 5394.70 IOPS, 21.07 MiB/s 00:24:08.833 Latency(us) 00:24:08.833 [2024-11-20T05:34:29.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.833 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:08.833 Verification LBA range: start 0x0 length 0x2000 00:24:08.833 TLSTESTn1 : 10.03 5392.39 21.06 0.00 0.00 23693.02 4805.97 77769.39 00:24:08.833 [2024-11-20T05:34:29.112Z] =================================================================================================================== 00:24:08.833 [2024-11-20T05:34:29.112Z] Total : 5392.39 21.06 0.00 0.00 23693.02 4805.97 77769.39 00:24:08.833 { 00:24:08.833 "results": [ 00:24:08.833 { 00:24:08.833 "job": "TLSTESTn1", 00:24:08.833 "core_mask": "0x4", 00:24:08.833 "workload": "verify", 00:24:08.833 "status": "finished", 00:24:08.833 "verify_range": { 00:24:08.833 "start": 0, 00:24:08.833 "length": 8192 00:24:08.833 }, 00:24:08.833 "queue_depth": 128, 00:24:08.833 "io_size": 4096, 00:24:08.833 "runtime": 10.027844, 00:24:08.833 "iops": 5392.385441975363, 00:24:08.833 "mibps": 21.064005632716263, 00:24:08.833 "io_failed": 0, 00:24:08.833 "io_timeout": 0, 00:24:08.833 "avg_latency_us": 23693.022162715293, 00:24:08.833 "min_latency_us": 4805.973333333333, 00:24:08.833 "max_latency_us": 77769.38666666667 00:24:08.833 } 00:24:08.833 ], 00:24:08.833 "core_count": 1 00:24:08.833 } 00:24:08.833 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.833 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2859198 00:24:08.833 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2859198 ']' 00:24:08.833 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2859198 00:24:08.833 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:24:08.833 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:08.833 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2859198 00:24:08.833 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:08.833 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:08.833 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2859198' 00:24:08.833 killing process with pid 2859198 00:24:08.833 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2859198 00:24:08.833 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.833 00:24:08.833 Latency(us) 00:24:08.833 [2024-11-20T05:34:29.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.833 [2024-11-20T05:34:29.112Z] =================================================================================================================== 00:24:08.833 [2024-11-20T05:34:29.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.833 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2859198 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2859174 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2859174 ']' 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2859174 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2859174 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2859174' 00:24:09.095 killing process with pid 2859174 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2859174 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2859174 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2861542 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2861542 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
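The teardown traced above (killprocess on the bdevperf pid, then on the target pid) is autotest_common.sh's killprocess helper; a simplified reconstruction from the xtrace, assuming Linux:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        # Refuse to kill anything that looks like a sudo wrapper:
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }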
00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2861542 ']' 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:09.095 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.095 [2024-11-20 06:34:29.353350] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:24:09.095 [2024-11-20 06:34:29.353404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.355 [2024-11-20 06:34:29.446833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.355 [2024-11-20 06:34:29.492042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.355 [2024-11-20 06:34:29.492100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.355 [2024-11-20 06:34:29.492109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.355 [2024-11-20 06:34:29.492116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.355 [2024-11-20 06:34:29.492123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
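The -e 0xFFFF passed to nvmf_tgt enables all tracepoint groups, which is what the app_setup_trace notices above are about. Following the hint printed in the log, a snapshot can be taken while this target runs (the app name nvmf and shm id 0 come straight from the notice):

    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory ring for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0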
00:24:09.355 [2024-11-20 06:34:29.492834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.926 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:09.926 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:09.927 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.927 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:09.927 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.187 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.187 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ocnidqRYCY 00:24:10.187 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ocnidqRYCY 00:24:10.187 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.187 [2024-11-20 06:34:30.380047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.187 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:10.452 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:10.713 [2024-11-20 06:34:30.781079] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.713 [2024-11-20 06:34:30.781424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.713 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:10.973 malloc0 00:24:10.973 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:10.973 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:24:11.234 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2861916 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2861916 /var/tmp/bdevperf.sock 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2861916 ']' 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:11.496 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.496 [2024-11-20 06:34:31.639451] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:24:11.496 [2024-11-20 06:34:31.639514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861916 ] 00:24:11.496 [2024-11-20 06:34:31.727527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.496 [2024-11-20 06:34:31.760964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.757 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:11.757 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:11.757 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:24:11.757 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:12.020 [2024-11-20 06:34:32.182670] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.020 nvme0n1 00:24:12.020 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.281 Running I/O for 1 seconds... 
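Because bdevperf was started with -z it sits idle on its own RPC socket (-r /var/tmp/bdevperf.sock) until driven externally; the one-second run above was kicked off by exactly the sequence just traced, which in plain shell is:

    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests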
00:24:13.224 3736.00 IOPS, 14.59 MiB/s
00:24:13.224 Latency(us)
00:24:13.224 [2024-11-20T05:34:33.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.224 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:24:13.224 Verification LBA range: start 0x0 length 0x2000
00:24:13.224 nvme0n1 : 1.02 3794.81 14.82 0.00 0.00 33500.74 4532.91 62914.56
00:24:13.224 [2024-11-20T05:34:33.503Z] ===================================================================================================================
00:24:13.224 [2024-11-20T05:34:33.503Z] Total : 3794.81 14.82 0.00 0.00 33500.74 4532.91 62914.56
00:24:13.224 {
00:24:13.224 "results": [
00:24:13.224 {
00:24:13.224 "job": "nvme0n1",
00:24:13.224 "core_mask": "0x2",
00:24:13.224 "workload": "verify",
00:24:13.224 "status": "finished",
00:24:13.224 "verify_range": {
00:24:13.224 "start": 0,
00:24:13.224 "length": 8192
00:24:13.224 },
00:24:13.224 "queue_depth": 128,
00:24:13.224 "io_size": 4096,
00:24:13.224 "runtime": 1.018234,
00:24:13.224 "iops": 3794.8055162172936,
00:24:13.224 "mibps": 14.823459047723803,
00:24:13.224 "io_failed": 0,
00:24:13.224 "io_timeout": 0,
00:24:13.224 "avg_latency_us": 33500.74302277433,
00:24:13.224 "min_latency_us": 4532.906666666667,
00:24:13.224 "max_latency_us": 62914.56
00:24:13.224 }
00:24:13.224 ],
00:24:13.224 "core_count": 1
00:24:13.224 }
00:24:13.224 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2861916
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2861916 ']'
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2861916
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2861916
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2861916'
killing process with pid 2861916
06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2861916
Received shutdown signal, test time was about 1.000000 seconds
00:24:13.225
00:24:13.225 Latency(us)
00:24:13.225 [2024-11-20T05:34:33.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.225 [2024-11-20T05:34:33.504Z] ===================================================================================================================
00:24:13.225 [2024-11-20T05:34:33.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:13.225 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2861916
00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2861542
00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2861542 ']'
00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2861542
00:24:13.485 06:34:33
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2861542 00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2861542' 00:24:13.485 killing process with pid 2861542 00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2861542 00:24:13.485 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2861542 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2862286 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2862286 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2862286 ']' 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:13.745 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.745 [2024-11-20 06:34:33.838214] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:24:13.745 [2024-11-20 06:34:33.838285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.745 [2024-11-20 06:34:33.927841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.745 [2024-11-20 06:34:33.966308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.745 [2024-11-20 06:34:33.966348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:13.745 [2024-11-20 06:34:33.966357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.745 [2024-11-20 06:34:33.966364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.745 [2024-11-20 06:34:33.966370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.745 [2024-11-20 06:34:33.966998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.690 [2024-11-20 06:34:34.681959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.690 malloc0 00:24:14.690 [2024-11-20 06:34:34.712073] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.690 [2024-11-20 06:34:34.712417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2862619 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2862619 /var/tmp/bdevperf.sock 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2862619 ']' 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:14.690 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.690 [2024-11-20 06:34:34.804730] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
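Every bdevperf launch in this test uses the same switch set, worth decoding once: -m 2 pins the app to core 1 (mask 0x2) so it never contends with the target's reactor on core 0 (the "Reactor started on core 1" notices confirm this), -z starts it idle until perform_tests arrives over the RPC socket named by -r, and -q 128 -o 4k -w verify -t 1 request a 1-second verify workload at queue depth 128 with 4 KiB I/O. The EAL parameter dump that follows shows the resulting process configuration. In isolation:

  # core mask 0x2 (core 1 only), idle until started over RPC (-z), private RPC
  # socket, queue depth 128, 4 KiB I/O, verify workload, 1-second run
  "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &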
00:24:14.690 [2024-11-20 06:34:34.804798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862619 ] 00:24:14.690 [2024-11-20 06:34:34.891933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.690 [2024-11-20 06:34:34.926726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:15.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:15.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ocnidqRYCY 00:24:15.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:15.633 [2024-11-20 06:34:35.901575] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.893 nvme0n1 00:24:15.893 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.893 Running I/O for 1 seconds... 00:24:16.833 6009.00 IOPS, 23.47 MiB/s 00:24:16.833 Latency(us) 00:24:16.833 [2024-11-20T05:34:37.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.833 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:16.833 Verification LBA range: start 0x0 length 0x2000 00:24:16.833 nvme0n1 : 1.02 5993.44 23.41 0.00 0.00 21139.03 5761.71 68594.35 00:24:16.833 [2024-11-20T05:34:37.112Z] =================================================================================================================== 00:24:16.833 [2024-11-20T05:34:37.112Z] Total : 5993.44 23.41 0.00 0.00 21139.03 5761.71 68594.35 00:24:16.833 { 00:24:16.833 "results": [ 00:24:16.833 { 00:24:16.833 "job": "nvme0n1", 00:24:16.833 "core_mask": "0x2", 00:24:16.833 "workload": "verify", 00:24:16.833 "status": "finished", 00:24:16.833 "verify_range": { 00:24:16.833 "start": 0, 00:24:16.834 "length": 8192 00:24:16.834 }, 00:24:16.834 "queue_depth": 128, 00:24:16.834 "io_size": 4096, 00:24:16.834 "runtime": 1.02412, 00:24:16.834 "iops": 5993.4382689528575, 00:24:16.834 "mibps": 23.4118682380971, 00:24:16.834 "io_failed": 0, 00:24:16.834 "io_timeout": 0, 00:24:16.834 "avg_latency_us": 21139.03247529054, 00:24:16.834 "min_latency_us": 5761.706666666667, 00:24:16.834 "max_latency_us": 68594.34666666666 00:24:16.834 } 00:24:16.834 ], 00:24:16.834 "core_count": 1 00:24:16.834 } 00:24:17.094 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:17.094 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.094 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.094 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.094 06:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:17.094 "subsystems": [ 00:24:17.094 { 00:24:17.094 "subsystem": "keyring", 00:24:17.094 "config": [ 00:24:17.094 { 00:24:17.094 "method": "keyring_file_add_key", 00:24:17.094 "params": { 00:24:17.094 "name": "key0", 00:24:17.094 "path": "/tmp/tmp.ocnidqRYCY" 00:24:17.094 } 00:24:17.094 } 00:24:17.094 ] 00:24:17.094 }, 00:24:17.094 { 00:24:17.094 "subsystem": "iobuf", 00:24:17.094 "config": [ 00:24:17.094 { 00:24:17.094 "method": "iobuf_set_options", 00:24:17.094 "params": { 00:24:17.094 "small_pool_count": 8192, 00:24:17.094 "large_pool_count": 1024, 00:24:17.094 "small_bufsize": 8192, 00:24:17.094 "large_bufsize": 135168, 00:24:17.094 "enable_numa": false 00:24:17.094 } 00:24:17.094 } 00:24:17.094 ] 00:24:17.094 }, 00:24:17.094 { 00:24:17.094 "subsystem": "sock", 00:24:17.094 "config": [ 00:24:17.095 { 00:24:17.095 "method": "sock_set_default_impl", 00:24:17.095 "params": { 00:24:17.095 "impl_name": "posix" 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "sock_impl_set_options", 00:24:17.095 "params": { 00:24:17.095 "impl_name": "ssl", 00:24:17.095 "recv_buf_size": 4096, 00:24:17.095 "send_buf_size": 4096, 00:24:17.095 "enable_recv_pipe": true, 00:24:17.095 "enable_quickack": false, 00:24:17.095 "enable_placement_id": 0, 00:24:17.095 "enable_zerocopy_send_server": true, 00:24:17.095 "enable_zerocopy_send_client": false, 00:24:17.095 "zerocopy_threshold": 0, 00:24:17.095 "tls_version": 0, 00:24:17.095 "enable_ktls": false 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "sock_impl_set_options", 00:24:17.095 "params": { 00:24:17.095 "impl_name": "posix", 00:24:17.095 "recv_buf_size": 2097152, 00:24:17.095 "send_buf_size": 2097152, 00:24:17.095 "enable_recv_pipe": true, 00:24:17.095 "enable_quickack": false, 00:24:17.095 "enable_placement_id": 0, 00:24:17.095 "enable_zerocopy_send_server": true, 00:24:17.095 "enable_zerocopy_send_client": false, 00:24:17.095 "zerocopy_threshold": 0, 00:24:17.095 "tls_version": 0, 00:24:17.095 "enable_ktls": false 00:24:17.095 } 00:24:17.095 } 00:24:17.095 ] 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "subsystem": "vmd", 00:24:17.095 "config": [] 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "subsystem": "accel", 00:24:17.095 "config": [ 00:24:17.095 { 00:24:17.095 "method": "accel_set_options", 00:24:17.095 "params": { 00:24:17.095 "small_cache_size": 128, 00:24:17.095 "large_cache_size": 16, 00:24:17.095 "task_count": 2048, 00:24:17.095 "sequence_count": 2048, 00:24:17.095 "buf_count": 2048 00:24:17.095 } 00:24:17.095 } 00:24:17.095 ] 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "subsystem": "bdev", 00:24:17.095 "config": [ 00:24:17.095 { 00:24:17.095 "method": "bdev_set_options", 00:24:17.095 "params": { 00:24:17.095 "bdev_io_pool_size": 65535, 00:24:17.095 "bdev_io_cache_size": 256, 00:24:17.095 "bdev_auto_examine": true, 00:24:17.095 "iobuf_small_cache_size": 128, 00:24:17.095 "iobuf_large_cache_size": 16 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "bdev_raid_set_options", 00:24:17.095 "params": { 00:24:17.095 "process_window_size_kb": 1024, 00:24:17.095 "process_max_bandwidth_mb_sec": 0 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "bdev_iscsi_set_options", 00:24:17.095 "params": { 00:24:17.095 "timeout_sec": 30 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "bdev_nvme_set_options", 00:24:17.095 "params": { 00:24:17.095 "action_on_timeout": "none", 00:24:17.095 
"timeout_us": 0, 00:24:17.095 "timeout_admin_us": 0, 00:24:17.095 "keep_alive_timeout_ms": 10000, 00:24:17.095 "arbitration_burst": 0, 00:24:17.095 "low_priority_weight": 0, 00:24:17.095 "medium_priority_weight": 0, 00:24:17.095 "high_priority_weight": 0, 00:24:17.095 "nvme_adminq_poll_period_us": 10000, 00:24:17.095 "nvme_ioq_poll_period_us": 0, 00:24:17.095 "io_queue_requests": 0, 00:24:17.095 "delay_cmd_submit": true, 00:24:17.095 "transport_retry_count": 4, 00:24:17.095 "bdev_retry_count": 3, 00:24:17.095 "transport_ack_timeout": 0, 00:24:17.095 "ctrlr_loss_timeout_sec": 0, 00:24:17.095 "reconnect_delay_sec": 0, 00:24:17.095 "fast_io_fail_timeout_sec": 0, 00:24:17.095 "disable_auto_failback": false, 00:24:17.095 "generate_uuids": false, 00:24:17.095 "transport_tos": 0, 00:24:17.095 "nvme_error_stat": false, 00:24:17.095 "rdma_srq_size": 0, 00:24:17.095 "io_path_stat": false, 00:24:17.095 "allow_accel_sequence": false, 00:24:17.095 "rdma_max_cq_size": 0, 00:24:17.095 "rdma_cm_event_timeout_ms": 0, 00:24:17.095 "dhchap_digests": [ 00:24:17.095 "sha256", 00:24:17.095 "sha384", 00:24:17.095 "sha512" 00:24:17.095 ], 00:24:17.095 "dhchap_dhgroups": [ 00:24:17.095 "null", 00:24:17.095 "ffdhe2048", 00:24:17.095 "ffdhe3072", 00:24:17.095 "ffdhe4096", 00:24:17.095 "ffdhe6144", 00:24:17.095 "ffdhe8192" 00:24:17.095 ] 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "bdev_nvme_set_hotplug", 00:24:17.095 "params": { 00:24:17.095 "period_us": 100000, 00:24:17.095 "enable": false 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "bdev_malloc_create", 00:24:17.095 "params": { 00:24:17.095 "name": "malloc0", 00:24:17.095 "num_blocks": 8192, 00:24:17.095 "block_size": 4096, 00:24:17.095 "physical_block_size": 4096, 00:24:17.095 "uuid": "ddc80c19-697a-4d94-9215-5984d4fc474a", 00:24:17.095 "optimal_io_boundary": 0, 00:24:17.095 "md_size": 0, 00:24:17.095 "dif_type": 0, 00:24:17.095 "dif_is_head_of_md": false, 00:24:17.095 "dif_pi_format": 0 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "bdev_wait_for_examine" 00:24:17.095 } 00:24:17.095 ] 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "subsystem": "nbd", 00:24:17.095 "config": [] 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "subsystem": "scheduler", 00:24:17.095 "config": [ 00:24:17.095 { 00:24:17.095 "method": "framework_set_scheduler", 00:24:17.095 "params": { 00:24:17.095 "name": "static" 00:24:17.095 } 00:24:17.095 } 00:24:17.095 ] 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "subsystem": "nvmf", 00:24:17.095 "config": [ 00:24:17.095 { 00:24:17.095 "method": "nvmf_set_config", 00:24:17.095 "params": { 00:24:17.095 "discovery_filter": "match_any", 00:24:17.095 "admin_cmd_passthru": { 00:24:17.095 "identify_ctrlr": false 00:24:17.095 }, 00:24:17.095 "dhchap_digests": [ 00:24:17.095 "sha256", 00:24:17.095 "sha384", 00:24:17.095 "sha512" 00:24:17.095 ], 00:24:17.095 "dhchap_dhgroups": [ 00:24:17.095 "null", 00:24:17.095 "ffdhe2048", 00:24:17.095 "ffdhe3072", 00:24:17.095 "ffdhe4096", 00:24:17.095 "ffdhe6144", 00:24:17.095 "ffdhe8192" 00:24:17.095 ] 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "nvmf_set_max_subsystems", 00:24:17.095 "params": { 00:24:17.095 "max_subsystems": 1024 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "nvmf_set_crdt", 00:24:17.095 "params": { 00:24:17.095 "crdt1": 0, 00:24:17.095 "crdt2": 0, 00:24:17.095 "crdt3": 0 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "nvmf_create_transport", 00:24:17.095 "params": 
{ 00:24:17.095 "trtype": "TCP", 00:24:17.095 "max_queue_depth": 128, 00:24:17.095 "max_io_qpairs_per_ctrlr": 127, 00:24:17.095 "in_capsule_data_size": 4096, 00:24:17.095 "max_io_size": 131072, 00:24:17.095 "io_unit_size": 131072, 00:24:17.095 "max_aq_depth": 128, 00:24:17.095 "num_shared_buffers": 511, 00:24:17.095 "buf_cache_size": 4294967295, 00:24:17.095 "dif_insert_or_strip": false, 00:24:17.095 "zcopy": false, 00:24:17.095 "c2h_success": false, 00:24:17.095 "sock_priority": 0, 00:24:17.095 "abort_timeout_sec": 1, 00:24:17.095 "ack_timeout": 0, 00:24:17.095 "data_wr_pool_size": 0 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "nvmf_create_subsystem", 00:24:17.095 "params": { 00:24:17.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.095 "allow_any_host": false, 00:24:17.095 "serial_number": "00000000000000000000", 00:24:17.095 "model_number": "SPDK bdev Controller", 00:24:17.095 "max_namespaces": 32, 00:24:17.095 "min_cntlid": 1, 00:24:17.095 "max_cntlid": 65519, 00:24:17.095 "ana_reporting": false 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "nvmf_subsystem_add_host", 00:24:17.095 "params": { 00:24:17.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.095 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.095 "psk": "key0" 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "nvmf_subsystem_add_ns", 00:24:17.095 "params": { 00:24:17.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.095 "namespace": { 00:24:17.095 "nsid": 1, 00:24:17.095 "bdev_name": "malloc0", 00:24:17.095 "nguid": "DDC80C19697A4D9492155984D4FC474A", 00:24:17.095 "uuid": "ddc80c19-697a-4d94-9215-5984d4fc474a", 00:24:17.095 "no_auto_visible": false 00:24:17.095 } 00:24:17.095 } 00:24:17.095 }, 00:24:17.095 { 00:24:17.095 "method": "nvmf_subsystem_add_listener", 00:24:17.095 "params": { 00:24:17.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.095 "listen_address": { 00:24:17.095 "trtype": "TCP", 00:24:17.095 "adrfam": "IPv4", 00:24:17.095 "traddr": "10.0.0.2", 00:24:17.095 "trsvcid": "4420" 00:24:17.095 }, 00:24:17.095 "secure_channel": false, 00:24:17.095 "sock_impl": "ssl" 00:24:17.095 } 00:24:17.095 } 00:24:17.095 ] 00:24:17.096 } 00:24:17.096 ] 00:24:17.096 }' 00:24:17.096 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:17.357 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:17.357 "subsystems": [ 00:24:17.357 { 00:24:17.357 "subsystem": "keyring", 00:24:17.357 "config": [ 00:24:17.357 { 00:24:17.357 "method": "keyring_file_add_key", 00:24:17.357 "params": { 00:24:17.357 "name": "key0", 00:24:17.357 "path": "/tmp/tmp.ocnidqRYCY" 00:24:17.357 } 00:24:17.357 } 00:24:17.357 ] 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "subsystem": "iobuf", 00:24:17.357 "config": [ 00:24:17.357 { 00:24:17.357 "method": "iobuf_set_options", 00:24:17.357 "params": { 00:24:17.357 "small_pool_count": 8192, 00:24:17.357 "large_pool_count": 1024, 00:24:17.357 "small_bufsize": 8192, 00:24:17.357 "large_bufsize": 135168, 00:24:17.357 "enable_numa": false 00:24:17.357 } 00:24:17.357 } 00:24:17.357 ] 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "subsystem": "sock", 00:24:17.357 "config": [ 00:24:17.357 { 00:24:17.357 "method": "sock_set_default_impl", 00:24:17.357 "params": { 00:24:17.357 "impl_name": "posix" 00:24:17.357 } 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "method": "sock_impl_set_options", 00:24:17.357 
"params": { 00:24:17.357 "impl_name": "ssl", 00:24:17.357 "recv_buf_size": 4096, 00:24:17.357 "send_buf_size": 4096, 00:24:17.357 "enable_recv_pipe": true, 00:24:17.357 "enable_quickack": false, 00:24:17.357 "enable_placement_id": 0, 00:24:17.357 "enable_zerocopy_send_server": true, 00:24:17.357 "enable_zerocopy_send_client": false, 00:24:17.357 "zerocopy_threshold": 0, 00:24:17.357 "tls_version": 0, 00:24:17.357 "enable_ktls": false 00:24:17.357 } 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "method": "sock_impl_set_options", 00:24:17.357 "params": { 00:24:17.357 "impl_name": "posix", 00:24:17.357 "recv_buf_size": 2097152, 00:24:17.357 "send_buf_size": 2097152, 00:24:17.357 "enable_recv_pipe": true, 00:24:17.357 "enable_quickack": false, 00:24:17.357 "enable_placement_id": 0, 00:24:17.357 "enable_zerocopy_send_server": true, 00:24:17.357 "enable_zerocopy_send_client": false, 00:24:17.357 "zerocopy_threshold": 0, 00:24:17.357 "tls_version": 0, 00:24:17.357 "enable_ktls": false 00:24:17.357 } 00:24:17.357 } 00:24:17.357 ] 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "subsystem": "vmd", 00:24:17.357 "config": [] 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "subsystem": "accel", 00:24:17.357 "config": [ 00:24:17.357 { 00:24:17.357 "method": "accel_set_options", 00:24:17.357 "params": { 00:24:17.357 "small_cache_size": 128, 00:24:17.357 "large_cache_size": 16, 00:24:17.357 "task_count": 2048, 00:24:17.357 "sequence_count": 2048, 00:24:17.357 "buf_count": 2048 00:24:17.357 } 00:24:17.357 } 00:24:17.357 ] 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "subsystem": "bdev", 00:24:17.357 "config": [ 00:24:17.357 { 00:24:17.357 "method": "bdev_set_options", 00:24:17.357 "params": { 00:24:17.357 "bdev_io_pool_size": 65535, 00:24:17.357 "bdev_io_cache_size": 256, 00:24:17.357 "bdev_auto_examine": true, 00:24:17.357 "iobuf_small_cache_size": 128, 00:24:17.357 "iobuf_large_cache_size": 16 00:24:17.357 } 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "method": "bdev_raid_set_options", 00:24:17.357 "params": { 00:24:17.357 "process_window_size_kb": 1024, 00:24:17.357 "process_max_bandwidth_mb_sec": 0 00:24:17.357 } 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "method": "bdev_iscsi_set_options", 00:24:17.357 "params": { 00:24:17.357 "timeout_sec": 30 00:24:17.357 } 00:24:17.357 }, 00:24:17.357 { 00:24:17.357 "method": "bdev_nvme_set_options", 00:24:17.357 "params": { 00:24:17.357 "action_on_timeout": "none", 00:24:17.357 "timeout_us": 0, 00:24:17.357 "timeout_admin_us": 0, 00:24:17.357 "keep_alive_timeout_ms": 10000, 00:24:17.357 "arbitration_burst": 0, 00:24:17.357 "low_priority_weight": 0, 00:24:17.357 "medium_priority_weight": 0, 00:24:17.357 "high_priority_weight": 0, 00:24:17.357 "nvme_adminq_poll_period_us": 10000, 00:24:17.357 "nvme_ioq_poll_period_us": 0, 00:24:17.357 "io_queue_requests": 512, 00:24:17.357 "delay_cmd_submit": true, 00:24:17.358 "transport_retry_count": 4, 00:24:17.358 "bdev_retry_count": 3, 00:24:17.358 "transport_ack_timeout": 0, 00:24:17.358 "ctrlr_loss_timeout_sec": 0, 00:24:17.358 "reconnect_delay_sec": 0, 00:24:17.358 "fast_io_fail_timeout_sec": 0, 00:24:17.358 "disable_auto_failback": false, 00:24:17.358 "generate_uuids": false, 00:24:17.358 "transport_tos": 0, 00:24:17.358 "nvme_error_stat": false, 00:24:17.358 "rdma_srq_size": 0, 00:24:17.358 "io_path_stat": false, 00:24:17.358 "allow_accel_sequence": false, 00:24:17.358 "rdma_max_cq_size": 0, 00:24:17.358 "rdma_cm_event_timeout_ms": 0, 00:24:17.358 "dhchap_digests": [ 00:24:17.358 "sha256", 00:24:17.358 "sha384", 00:24:17.358 
"sha512" 00:24:17.358 ], 00:24:17.358 "dhchap_dhgroups": [ 00:24:17.358 "null", 00:24:17.358 "ffdhe2048", 00:24:17.358 "ffdhe3072", 00:24:17.358 "ffdhe4096", 00:24:17.358 "ffdhe6144", 00:24:17.358 "ffdhe8192" 00:24:17.358 ] 00:24:17.358 } 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "method": "bdev_nvme_attach_controller", 00:24:17.358 "params": { 00:24:17.358 "name": "nvme0", 00:24:17.358 "trtype": "TCP", 00:24:17.358 "adrfam": "IPv4", 00:24:17.358 "traddr": "10.0.0.2", 00:24:17.358 "trsvcid": "4420", 00:24:17.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.358 "prchk_reftag": false, 00:24:17.358 "prchk_guard": false, 00:24:17.358 "ctrlr_loss_timeout_sec": 0, 00:24:17.358 "reconnect_delay_sec": 0, 00:24:17.358 "fast_io_fail_timeout_sec": 0, 00:24:17.358 "psk": "key0", 00:24:17.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.358 "hdgst": false, 00:24:17.358 "ddgst": false, 00:24:17.358 "multipath": "multipath" 00:24:17.358 } 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "method": "bdev_nvme_set_hotplug", 00:24:17.358 "params": { 00:24:17.358 "period_us": 100000, 00:24:17.358 "enable": false 00:24:17.358 } 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "method": "bdev_enable_histogram", 00:24:17.358 "params": { 00:24:17.358 "name": "nvme0n1", 00:24:17.358 "enable": true 00:24:17.358 } 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "method": "bdev_wait_for_examine" 00:24:17.358 } 00:24:17.358 ] 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "subsystem": "nbd", 00:24:17.358 "config": [] 00:24:17.358 } 00:24:17.358 ] 00:24:17.358 }' 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2862619 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2862619 ']' 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2862619 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2862619 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2862619' 00:24:17.358 killing process with pid 2862619 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2862619 00:24:17.358 Received shutdown signal, test time was about 1.000000 seconds 00:24:17.358 00:24:17.358 Latency(us) 00:24:17.358 [2024-11-20T05:34:37.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.358 [2024-11-20T05:34:37.637Z] =================================================================================================================== 00:24:17.358 [2024-11-20T05:34:37.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.358 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2862619 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2862286 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2862286 
']' 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2862286 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2862286 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2862286' 00:24:17.619 killing process with pid 2862286 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2862286 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2862286 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.619 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:17.619 "subsystems": [ 00:24:17.619 { 00:24:17.619 "subsystem": "keyring", 00:24:17.619 "config": [ 00:24:17.619 { 00:24:17.619 "method": "keyring_file_add_key", 00:24:17.619 "params": { 00:24:17.619 "name": "key0", 00:24:17.619 "path": "/tmp/tmp.ocnidqRYCY" 00:24:17.619 } 00:24:17.619 } 00:24:17.619 ] 00:24:17.619 }, 00:24:17.619 { 00:24:17.619 "subsystem": "iobuf", 00:24:17.619 "config": [ 00:24:17.619 { 00:24:17.619 "method": "iobuf_set_options", 00:24:17.619 "params": { 00:24:17.619 "small_pool_count": 8192, 00:24:17.619 "large_pool_count": 1024, 00:24:17.619 "small_bufsize": 8192, 00:24:17.619 "large_bufsize": 135168, 00:24:17.619 "enable_numa": false 00:24:17.619 } 00:24:17.619 } 00:24:17.619 ] 00:24:17.619 }, 00:24:17.619 { 00:24:17.619 "subsystem": "sock", 00:24:17.619 "config": [ 00:24:17.619 { 00:24:17.619 "method": "sock_set_default_impl", 00:24:17.619 "params": { 00:24:17.619 "impl_name": "posix" 00:24:17.619 } 00:24:17.619 }, 00:24:17.619 { 00:24:17.619 "method": "sock_impl_set_options", 00:24:17.619 "params": { 00:24:17.619 "impl_name": "ssl", 00:24:17.619 "recv_buf_size": 4096, 00:24:17.619 "send_buf_size": 4096, 00:24:17.619 "enable_recv_pipe": true, 00:24:17.619 "enable_quickack": false, 00:24:17.619 "enable_placement_id": 0, 00:24:17.619 "enable_zerocopy_send_server": true, 00:24:17.619 "enable_zerocopy_send_client": false, 00:24:17.619 "zerocopy_threshold": 0, 00:24:17.619 "tls_version": 0, 00:24:17.619 "enable_ktls": false 00:24:17.619 } 00:24:17.619 }, 00:24:17.619 { 00:24:17.619 "method": "sock_impl_set_options", 00:24:17.619 "params": { 00:24:17.619 "impl_name": "posix", 00:24:17.619 "recv_buf_size": 2097152, 00:24:17.619 "send_buf_size": 2097152, 00:24:17.619 "enable_recv_pipe": true, 00:24:17.620 "enable_quickack": false, 00:24:17.620 "enable_placement_id": 0, 00:24:17.620 "enable_zerocopy_send_server": true, 00:24:17.620 "enable_zerocopy_send_client": false, 00:24:17.620 "zerocopy_threshold": 0, 00:24:17.620 "tls_version": 0, 00:24:17.620 "enable_ktls": 
false 00:24:17.620 } 00:24:17.620 } 00:24:17.620 ] 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "subsystem": "vmd", 00:24:17.620 "config": [] 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "subsystem": "accel", 00:24:17.620 "config": [ 00:24:17.620 { 00:24:17.620 "method": "accel_set_options", 00:24:17.620 "params": { 00:24:17.620 "small_cache_size": 128, 00:24:17.620 "large_cache_size": 16, 00:24:17.620 "task_count": 2048, 00:24:17.620 "sequence_count": 2048, 00:24:17.620 "buf_count": 2048 00:24:17.620 } 00:24:17.620 } 00:24:17.620 ] 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "subsystem": "bdev", 00:24:17.620 "config": [ 00:24:17.620 { 00:24:17.620 "method": "bdev_set_options", 00:24:17.620 "params": { 00:24:17.620 "bdev_io_pool_size": 65535, 00:24:17.620 "bdev_io_cache_size": 256, 00:24:17.620 "bdev_auto_examine": true, 00:24:17.620 "iobuf_small_cache_size": 128, 00:24:17.620 "iobuf_large_cache_size": 16 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "bdev_raid_set_options", 00:24:17.620 "params": { 00:24:17.620 "process_window_size_kb": 1024, 00:24:17.620 "process_max_bandwidth_mb_sec": 0 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "bdev_iscsi_set_options", 00:24:17.620 "params": { 00:24:17.620 "timeout_sec": 30 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "bdev_nvme_set_options", 00:24:17.620 "params": { 00:24:17.620 "action_on_timeout": "none", 00:24:17.620 "timeout_us": 0, 00:24:17.620 "timeout_admin_us": 0, 00:24:17.620 "keep_alive_timeout_ms": 10000, 00:24:17.620 "arbitration_burst": 0, 00:24:17.620 "low_priority_weight": 0, 00:24:17.620 "medium_priority_weight": 0, 00:24:17.620 "high_priority_weight": 0, 00:24:17.620 "nvme_adminq_poll_period_us": 10000, 00:24:17.620 "nvme_ioq_poll_period_us": 0, 00:24:17.620 "io_queue_requests": 0, 00:24:17.620 "delay_cmd_submit": true, 00:24:17.620 "transport_retry_count": 4, 00:24:17.620 "bdev_retry_count": 3, 00:24:17.620 "transport_ack_timeout": 0, 00:24:17.620 "ctrlr_loss_timeout_sec": 0, 00:24:17.620 "reconnect_delay_sec": 0, 00:24:17.620 "fast_io_fail_timeout_sec": 0, 00:24:17.620 "disable_auto_failback": false, 00:24:17.620 "generate_uuids": false, 00:24:17.620 "transport_tos": 0, 00:24:17.620 "nvme_error_stat": false, 00:24:17.620 "rdma_srq_size": 0, 00:24:17.620 "io_path_stat": false, 00:24:17.620 "allow_accel_sequence": false, 00:24:17.620 "rdma_max_cq_size": 0, 00:24:17.620 "rdma_cm_event_timeout_ms": 0, 00:24:17.620 "dhchap_digests": [ 00:24:17.620 "sha256", 00:24:17.620 "sha384", 00:24:17.620 "sha512" 00:24:17.620 ], 00:24:17.620 "dhchap_dhgroups": [ 00:24:17.620 "null", 00:24:17.620 "ffdhe2048", 00:24:17.620 "ffdhe3072", 00:24:17.620 "ffdhe4096", 00:24:17.620 "ffdhe6144", 00:24:17.620 "ffdhe8192" 00:24:17.620 ] 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "bdev_nvme_set_hotplug", 00:24:17.620 "params": { 00:24:17.620 "period_us": 100000, 00:24:17.620 "enable": false 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "bdev_malloc_create", 00:24:17.620 "params": { 00:24:17.620 "name": "malloc0", 00:24:17.620 "num_blocks": 8192, 00:24:17.620 "block_size": 4096, 00:24:17.620 "physical_block_size": 4096, 00:24:17.620 "uuid": "ddc80c19-697a-4d94-9215-5984d4fc474a", 00:24:17.620 "optimal_io_boundary": 0, 00:24:17.620 "md_size": 0, 00:24:17.620 "dif_type": 0, 00:24:17.620 "dif_is_head_of_md": false, 00:24:17.620 "dif_pi_format": 0 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "bdev_wait_for_examine" 
00:24:17.620 } 00:24:17.620 ] 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "subsystem": "nbd", 00:24:17.620 "config": [] 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "subsystem": "scheduler", 00:24:17.620 "config": [ 00:24:17.620 { 00:24:17.620 "method": "framework_set_scheduler", 00:24:17.620 "params": { 00:24:17.620 "name": "static" 00:24:17.620 } 00:24:17.620 } 00:24:17.620 ] 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "subsystem": "nvmf", 00:24:17.620 "config": [ 00:24:17.620 { 00:24:17.620 "method": "nvmf_set_config", 00:24:17.620 "params": { 00:24:17.620 "discovery_filter": "match_any", 00:24:17.620 "admin_cmd_passthru": { 00:24:17.620 "identify_ctrlr": false 00:24:17.620 }, 00:24:17.620 "dhchap_digests": [ 00:24:17.620 "sha256", 00:24:17.620 "sha384", 00:24:17.620 "sha512" 00:24:17.620 ], 00:24:17.620 "dhchap_dhgroups": [ 00:24:17.620 "null", 00:24:17.620 "ffdhe2048", 00:24:17.620 "ffdhe3072", 00:24:17.620 "ffdhe4096", 00:24:17.620 "ffdhe6144", 00:24:17.620 "ffdhe8192" 00:24:17.620 ] 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "nvmf_set_max_subsystems", 00:24:17.620 "params": { 00:24:17.620 "max_subsystems": 1024 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "nvmf_set_crdt", 00:24:17.620 "params": { 00:24:17.620 "crdt1": 0, 00:24:17.620 "crdt2": 0, 00:24:17.620 "crdt3": 0 00:24:17.620 } 00:24:17.620 }, 00:24:17.620 { 00:24:17.620 "method": "nvmf_create_transport", 00:24:17.620 "params": { 00:24:17.620 "trtype": "TCP", 00:24:17.620 "max_queue_depth": 128, 00:24:17.620 "max_io_qpairs_per_ctrlr": 127, 00:24:17.620 "in_capsule_data_size": 4096, 00:24:17.620 "max_io_size": 131072, 00:24:17.620 "io_unit_size": 131072, 00:24:17.620 "max_aq_depth": 128, 00:24:17.620 "num_shared_buffers": 511, 00:24:17.620 "buf_cache_size": 4294967295, 00:24:17.620 "dif_insert_or_strip": false, 00:24:17.620 "zcopy": false, 00:24:17.620 "c2h_success": false, 00:24:17.620 "sock_priority": 0, 00:24:17.620 "abort_timeout_sec": 1, 00:24:17.620 "ack_timeout": 0, 00:24:17.620 "data_wr_pool_size": 0 00:24:17.621 } 00:24:17.621 }, 00:24:17.621 { 00:24:17.621 "method": "nvmf_create_subsystem", 00:24:17.621 "params": { 00:24:17.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.621 "allow_any_host": false, 00:24:17.621 "serial_number": "00000000000000000000", 00:24:17.621 "model_number": "SPDK bdev Controller", 00:24:17.621 "max_namespaces": 32, 00:24:17.621 "min_cntlid": 1, 00:24:17.621 "max_cntlid": 65519, 00:24:17.621 "ana_reporting": false 00:24:17.621 } 00:24:17.621 }, 00:24:17.621 { 00:24:17.621 "method": "nvmf_subsystem_add_host", 00:24:17.621 "params": { 00:24:17.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.621 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.621 "psk": "key0" 00:24:17.621 } 00:24:17.621 }, 00:24:17.621 { 00:24:17.621 "method": "nvmf_subsystem_add_ns", 00:24:17.621 "params": { 00:24:17.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.621 "namespace": { 00:24:17.621 "nsid": 1, 00:24:17.621 "bdev_name": "malloc0", 00:24:17.621 "nguid": "DDC80C19697A4D9492155984D4FC474A", 00:24:17.621 "uuid": "ddc80c19-697a-4d94-9215-5984d4fc474a", 00:24:17.621 "no_auto_visible": false 00:24:17.621 } 00:24:17.621 } 00:24:17.621 }, 00:24:17.621 { 00:24:17.621 "method": "nvmf_subsystem_add_listener", 00:24:17.621 "params": { 00:24:17.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.621 "listen_address": { 00:24:17.621 "trtype": "TCP", 00:24:17.621 "adrfam": "IPv4", 00:24:17.621 "traddr": "10.0.0.2", 00:24:17.621 "trsvcid": "4420" 00:24:17.621 }, 00:24:17.621 
"secure_channel": false, 00:24:17.621 "sock_impl": "ssl" 00:24:17.621 } 00:24:17.621 } 00:24:17.621 ] 00:24:17.621 } 00:24:17.621 ] 00:24:17.621 }' 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2863266 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2863266 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2863266 ']' 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:17.621 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.882 [2024-11-20 06:34:37.908233] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:24:17.882 [2024-11-20 06:34:37.908293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.882 [2024-11-20 06:34:37.998154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.882 [2024-11-20 06:34:38.028273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.882 [2024-11-20 06:34:38.028302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.882 [2024-11-20 06:34:38.028308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.882 [2024-11-20 06:34:38.028313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.882 [2024-11-20 06:34:38.028317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.882 [2024-11-20 06:34:38.028797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.143 [2024-11-20 06:34:38.222677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.143 [2024-11-20 06:34:38.254710] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:18.143 [2024-11-20 06:34:38.254927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2863332 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2863332 /var/tmp/bdevperf.sock 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2863332 ']' 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
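waitforlisten then blocks until the new process answers on its UNIX-domain RPC socket. The helper itself lives in autotest_common.sh, outside this excerpt; judging from the rpc_addr and max_retries=100 locals visible in the trace, a minimal stand-in would poll like this (the 0.1 s interval is an assumption):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
          "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods \
              &>/dev/null && return 0               # RPC server is answering
          sleep 0.1
      done
      return 1
  }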
00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.715 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:18.715 "subsystems": [ 00:24:18.715 { 00:24:18.715 "subsystem": "keyring", 00:24:18.715 "config": [ 00:24:18.715 { 00:24:18.715 "method": "keyring_file_add_key", 00:24:18.715 "params": { 00:24:18.715 "name": "key0", 00:24:18.715 "path": "/tmp/tmp.ocnidqRYCY" 00:24:18.715 } 00:24:18.715 } 00:24:18.715 ] 00:24:18.715 }, 00:24:18.715 { 00:24:18.715 "subsystem": "iobuf", 00:24:18.715 "config": [ 00:24:18.715 { 00:24:18.715 "method": "iobuf_set_options", 00:24:18.715 "params": { 00:24:18.715 "small_pool_count": 8192, 00:24:18.715 "large_pool_count": 1024, 00:24:18.715 "small_bufsize": 8192, 00:24:18.715 "large_bufsize": 135168, 00:24:18.715 "enable_numa": false 00:24:18.715 } 00:24:18.715 } 00:24:18.715 ] 00:24:18.715 }, 00:24:18.715 { 00:24:18.715 "subsystem": "sock", 00:24:18.715 "config": [ 00:24:18.715 { 00:24:18.715 "method": "sock_set_default_impl", 00:24:18.715 "params": { 00:24:18.715 "impl_name": "posix" 00:24:18.715 } 00:24:18.715 }, 00:24:18.715 { 00:24:18.715 "method": "sock_impl_set_options", 00:24:18.715 "params": { 00:24:18.715 "impl_name": "ssl", 00:24:18.715 "recv_buf_size": 4096, 00:24:18.715 "send_buf_size": 4096, 00:24:18.715 "enable_recv_pipe": true, 00:24:18.715 "enable_quickack": false, 00:24:18.715 "enable_placement_id": 0, 00:24:18.715 "enable_zerocopy_send_server": true, 00:24:18.715 "enable_zerocopy_send_client": false, 00:24:18.715 "zerocopy_threshold": 0, 00:24:18.715 "tls_version": 0, 00:24:18.715 "enable_ktls": false 00:24:18.715 } 00:24:18.715 }, 00:24:18.715 { 00:24:18.715 "method": "sock_impl_set_options", 00:24:18.715 "params": { 00:24:18.715 "impl_name": "posix", 00:24:18.715 "recv_buf_size": 2097152, 00:24:18.715 "send_buf_size": 2097152, 00:24:18.715 "enable_recv_pipe": true, 00:24:18.715 "enable_quickack": false, 00:24:18.715 "enable_placement_id": 0, 00:24:18.715 "enable_zerocopy_send_server": true, 00:24:18.715 "enable_zerocopy_send_client": false, 00:24:18.715 "zerocopy_threshold": 0, 00:24:18.715 "tls_version": 0, 00:24:18.715 "enable_ktls": false 00:24:18.715 } 00:24:18.715 } 00:24:18.715 ] 00:24:18.715 }, 00:24:18.715 { 00:24:18.715 "subsystem": "vmd", 00:24:18.715 "config": [] 00:24:18.715 }, 00:24:18.715 { 00:24:18.715 "subsystem": "accel", 00:24:18.715 "config": [ 00:24:18.715 { 00:24:18.715 "method": "accel_set_options", 00:24:18.715 "params": { 00:24:18.715 "small_cache_size": 128, 00:24:18.715 "large_cache_size": 16, 00:24:18.715 "task_count": 2048, 00:24:18.715 "sequence_count": 2048, 00:24:18.715 "buf_count": 2048 00:24:18.715 } 00:24:18.715 } 00:24:18.715 ] 00:24:18.715 }, 00:24:18.715 { 00:24:18.715 "subsystem": "bdev", 00:24:18.715 "config": [ 00:24:18.715 { 00:24:18.715 "method": "bdev_set_options", 00:24:18.715 "params": { 00:24:18.715 "bdev_io_pool_size": 65535, 00:24:18.715 "bdev_io_cache_size": 256, 00:24:18.715 "bdev_auto_examine": true, 00:24:18.715 "iobuf_small_cache_size": 128, 00:24:18.716 "iobuf_large_cache_size": 16 00:24:18.716 } 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "method": 
"bdev_raid_set_options", 00:24:18.716 "params": { 00:24:18.716 "process_window_size_kb": 1024, 00:24:18.716 "process_max_bandwidth_mb_sec": 0 00:24:18.716 } 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "method": "bdev_iscsi_set_options", 00:24:18.716 "params": { 00:24:18.716 "timeout_sec": 30 00:24:18.716 } 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "method": "bdev_nvme_set_options", 00:24:18.716 "params": { 00:24:18.716 "action_on_timeout": "none", 00:24:18.716 "timeout_us": 0, 00:24:18.716 "timeout_admin_us": 0, 00:24:18.716 "keep_alive_timeout_ms": 10000, 00:24:18.716 "arbitration_burst": 0, 00:24:18.716 "low_priority_weight": 0, 00:24:18.716 "medium_priority_weight": 0, 00:24:18.716 "high_priority_weight": 0, 00:24:18.716 "nvme_adminq_poll_period_us": 10000, 00:24:18.716 "nvme_ioq_poll_period_us": 0, 00:24:18.716 "io_queue_requests": 512, 00:24:18.716 "delay_cmd_submit": true, 00:24:18.716 "transport_retry_count": 4, 00:24:18.716 "bdev_retry_count": 3, 00:24:18.716 "transport_ack_timeout": 0, 00:24:18.716 "ctrlr_loss_timeout_sec": 0, 00:24:18.716 "reconnect_delay_sec": 0, 00:24:18.716 "fast_io_fail_timeout_sec": 0, 00:24:18.716 "disable_auto_failback": false, 00:24:18.716 "generate_uuids": false, 00:24:18.716 "transport_tos": 0, 00:24:18.716 "nvme_error_stat": false, 00:24:18.716 "rdma_srq_size": 0, 00:24:18.716 "io_path_stat": false, 00:24:18.716 "allow_accel_sequence": false, 00:24:18.716 "rdma_max_cq_size": 0, 00:24:18.716 "rdma_cm_event_timeout_ms": 0, 00:24:18.716 "dhchap_digests": [ 00:24:18.716 "sha256", 00:24:18.716 "sha384", 00:24:18.716 "sha512" 00:24:18.716 ], 00:24:18.716 "dhchap_dhgroups": [ 00:24:18.716 "null", 00:24:18.716 "ffdhe2048", 00:24:18.716 "ffdhe3072", 00:24:18.716 "ffdhe4096", 00:24:18.716 "ffdhe6144", 00:24:18.716 "ffdhe8192" 00:24:18.716 ] 00:24:18.716 } 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "method": "bdev_nvme_attach_controller", 00:24:18.716 "params": { 00:24:18.716 "name": "nvme0", 00:24:18.716 "trtype": "TCP", 00:24:18.716 "adrfam": "IPv4", 00:24:18.716 "traddr": "10.0.0.2", 00:24:18.716 "trsvcid": "4420", 00:24:18.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.716 "prchk_reftag": false, 00:24:18.716 "prchk_guard": false, 00:24:18.716 "ctrlr_loss_timeout_sec": 0, 00:24:18.716 "reconnect_delay_sec": 0, 00:24:18.716 "fast_io_fail_timeout_sec": 0, 00:24:18.716 "psk": "key0", 00:24:18.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.716 "hdgst": false, 00:24:18.716 "ddgst": false, 00:24:18.716 "multipath": "multipath" 00:24:18.716 } 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "method": "bdev_nvme_set_hotplug", 00:24:18.716 "params": { 00:24:18.716 "period_us": 100000, 00:24:18.716 "enable": false 00:24:18.716 } 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "method": "bdev_enable_histogram", 00:24:18.716 "params": { 00:24:18.716 "name": "nvme0n1", 00:24:18.716 "enable": true 00:24:18.716 } 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "method": "bdev_wait_for_examine" 00:24:18.716 } 00:24:18.716 ] 00:24:18.716 }, 00:24:18.716 { 00:24:18.716 "subsystem": "nbd", 00:24:18.716 "config": [] 00:24:18.716 } 00:24:18.716 ] 00:24:18.716 }' 00:24:18.716 [2024-11-20 06:34:38.785870] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:24:18.716 [2024-11-20 06:34:38.785921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863332 ] 00:24:18.716 [2024-11-20 06:34:38.869686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.716 [2024-11-20 06:34:38.899550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.976 [2024-11-20 06:34:39.035536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.547 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.547 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:19.547 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:19.547 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:19.547 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.547 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.807 Running I/O for 1 seconds... 00:24:20.749 5277.00 IOPS, 20.61 MiB/s 00:24:20.749 Latency(us) 00:24:20.749 [2024-11-20T05:34:41.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.749 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:20.749 Verification LBA range: start 0x0 length 0x2000 00:24:20.749 nvme0n1 : 1.02 5301.05 20.71 0.00 0.00 23955.95 5652.48 24466.77 00:24:20.749 [2024-11-20T05:34:41.028Z] =================================================================================================================== 00:24:20.749 [2024-11-20T05:34:41.028Z] Total : 5301.05 20.71 0.00 0.00 23955.95 5652.48 24466.77 00:24:20.749 { 00:24:20.749 "results": [ 00:24:20.749 { 00:24:20.749 "job": "nvme0n1", 00:24:20.749 "core_mask": "0x2", 00:24:20.749 "workload": "verify", 00:24:20.749 "status": "finished", 00:24:20.749 "verify_range": { 00:24:20.749 "start": 0, 00:24:20.749 "length": 8192 00:24:20.749 }, 00:24:20.749 "queue_depth": 128, 00:24:20.749 "io_size": 4096, 00:24:20.749 "runtime": 1.019798, 00:24:20.749 "iops": 5301.049815747824, 00:24:20.749 "mibps": 20.707225842764938, 00:24:20.749 "io_failed": 0, 00:24:20.749 "io_timeout": 0, 00:24:20.749 "avg_latency_us": 23955.945206560613, 00:24:20.749 "min_latency_us": 5652.48, 00:24:20.749 "max_latency_us": 24466.773333333334 00:24:20.749 } 00:24:20.749 ], 00:24:20.749 "core_count": 1 00:24:20.749 } 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid 
']' 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:20.749 nvmf_trace.0 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2863332 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2863332 ']' 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2863332 00:24:20.749 06:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:20.749 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.749 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2863332 00:24:21.009 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:21.009 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:21.009 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2863332' 00:24:21.009 killing process with pid 2863332 00:24:21.009 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2863332 00:24:21.009 Received shutdown signal, test time was about 1.000000 seconds 00:24:21.009 00:24:21.009 Latency(us) 00:24:21.009 [2024-11-20T05:34:41.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.009 [2024-11-20T05:34:41.288Z] =================================================================================================================== 00:24:21.009 [2024-11-20T05:34:41.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.009 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2863332 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.010 rmmod nvme_tcp 00:24:21.010 rmmod nvme_fabrics 00:24:21.010 rmmod nvme_keyring 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.010 06:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2863266 ']' 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2863266 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2863266 ']' 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2863266 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:21.010 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2863266 00:24:21.270 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2863266' 00:24:21.271 killing process with pid 2863266 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2863266 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2863266 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.271 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.nRFf1hWNzp /tmp/tmp.EcTtccluBk /tmp/tmp.ocnidqRYCY 00:24:23.815 00:24:23.815 real 1m26.874s 00:24:23.815 user 2m17.108s 00:24:23.815 sys 0m26.762s 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.815 ************************************ 00:24:23.815 END TEST nvmf_tls 
00:24:23.815 ************************************ 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:23.815 ************************************ 00:24:23.815 START TEST nvmf_fips 00:24:23.815 ************************************ 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:23.815 * Looking for test storage... 00:24:23.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.815 --rc genhtml_branch_coverage=1 00:24:23.815 --rc genhtml_function_coverage=1 00:24:23.815 --rc genhtml_legend=1 00:24:23.815 --rc geninfo_all_blocks=1 00:24:23.815 --rc geninfo_unexecuted_blocks=1 00:24:23.815 00:24:23.815 ' 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.815 --rc genhtml_branch_coverage=1 00:24:23.815 --rc genhtml_function_coverage=1 00:24:23.815 --rc genhtml_legend=1 00:24:23.815 --rc geninfo_all_blocks=1 00:24:23.815 --rc geninfo_unexecuted_blocks=1 00:24:23.815 00:24:23.815 ' 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.815 --rc genhtml_branch_coverage=1 00:24:23.815 --rc genhtml_function_coverage=1 00:24:23.815 --rc genhtml_legend=1 00:24:23.815 --rc geninfo_all_blocks=1 00:24:23.815 --rc geninfo_unexecuted_blocks=1 00:24:23.815 00:24:23.815 ' 00:24:23.815 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:23.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.816 --rc genhtml_branch_coverage=1 00:24:23.816 --rc genhtml_function_coverage=1 00:24:23.816 --rc genhtml_legend=1 00:24:23.816 --rc geninfo_all_blocks=1 00:24:23.816 --rc geninfo_unexecuted_blocks=1 00:24:23.816 00:24:23.816 ' 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:23.816 06:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:23.816 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:23.817 Error setting digest 00:24:23.817 40F22650B17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:23.817 40F22650B17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.817 
06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.817 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.817 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.817 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.817 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.957 06:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.957 06:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.957 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.958 06:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:24:31.958 00:24:31.958 --- 10.0.0.2 ping statistics --- 00:24:31.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.958 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:24:31.958 00:24:31.958 --- 10.0.0.1 ping statistics --- 00:24:31.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.958 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2868048 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2868048 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2868048 ']' 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:31.958 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.958 [2024-11-20 06:34:51.572429] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
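Stripped of the xtrace prefixes, the network plumbing traced above is a two-endpoint namespace setup: the target-side port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened, and reachability is ping-verified in both directions before nvmf_tgt is launched inside the namespace. Condensed, with the commands as they appear in the trace (the iptables comment flag is omitted for brevity):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                     # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator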
00:24:31.958 [2024-11-20 06:34:51.572507] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.958 [2024-11-20 06:34:51.673702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.958 [2024-11-20 06:34:51.724849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.958 [2024-11-20 06:34:51.724903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.958 [2024-11-20 06:34:51.724912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.958 [2024-11-20 06:34:51.724919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.958 [2024-11-20 06:34:51.724925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.958 [2024-11-20 06:34:51.725722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Lq2 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Lq2 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Lq2 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Lq2 00:24:32.219 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.479 [2024-11-20 06:34:52.568941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.479 [2024-11-20 06:34:52.584947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:32.479 [2024-11-20 06:34:52.585224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.479 malloc0 00:24:32.479 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.479 06:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2868385 00:24:32.479 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2868385 /var/tmp/bdevperf.sock 00:24:32.480 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.480 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2868385 ']' 00:24:32.480 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.480 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:32.480 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.480 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:32.480 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:32.480 [2024-11-20 06:34:52.729545] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:24:32.480 [2024-11-20 06:34:52.729624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868385 ] 00:24:32.740 [2024-11-20 06:34:52.824560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.740 [2024-11-20 06:34:52.875428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.312 06:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.312 06:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:33.312 06:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Lq2 00:24:33.573 06:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:33.833 [2024-11-20 06:34:53.886789] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.833 TLSTESTn1 00:24:33.833 06:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.833 Running I/O for 10 seconds... 
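The client-side sequence that drives the 10-second verify run below reduces to three calls against the bdevperf socket, copied from the trace (script paths shortened here; the full workspace paths are visible above): the PSK file written by fips.sh is registered in the keyring as key0, a TLS controller named TLSTEST is attached to the target, and the I/O job is kicked off.

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Lq2
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # runs the configured verify workload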
00:24:36.161 4880.00 IOPS, 19.06 MiB/s
[2024-11-20T05:34:57.384Z] 5602.00 IOPS, 21.88 MiB/s
[2024-11-20T05:34:58.323Z] 5426.33 IOPS, 21.20 MiB/s
[2024-11-20T05:34:59.264Z] 5566.75 IOPS, 21.75 MiB/s
[2024-11-20T05:35:00.204Z] 5499.20 IOPS, 21.48 MiB/s
[2024-11-20T05:35:01.145Z] 5648.00 IOPS, 22.06 MiB/s
[2024-11-20T05:35:02.529Z] 5552.71 IOPS, 21.69 MiB/s
[2024-11-20T05:35:03.101Z] 5498.38 IOPS, 21.48 MiB/s
[2024-11-20T05:35:04.484Z] 5537.78 IOPS, 21.63 MiB/s
[2024-11-20T05:35:04.484Z] 5599.80 IOPS, 21.87 MiB/s
00:24:44.205 Latency(us)
00:24:44.205 [2024-11-20T05:35:04.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:44.205 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:44.205 Verification LBA range: start 0x0 length 0x2000
00:24:44.205 TLSTESTn1 : 10.04 5589.76 21.84 0.00 0.00 22838.85 6116.69 79953.92
00:24:44.205 [2024-11-20T05:35:04.484Z] ===================================================================================================================
00:24:44.205 [2024-11-20T05:35:04.484Z] Total : 5589.76 21.84 0.00 0.00 22838.85 6116.69 79953.92
00:24:44.205 {
00:24:44.205   "results": [
00:24:44.205     {
00:24:44.205       "job": "TLSTESTn1",
00:24:44.205       "core_mask": "0x4",
00:24:44.205       "workload": "verify",
00:24:44.205       "status": "finished",
00:24:44.205       "verify_range": {
00:24:44.205         "start": 0,
00:24:44.205         "length": 8192
00:24:44.205       },
00:24:44.205       "queue_depth": 128,
00:24:44.205       "io_size": 4096,
00:24:44.205       "runtime": 10.040858,
00:24:44.205       "iops": 5589.761353063653,
00:24:44.205       "mibps": 21.835005285404893,
00:24:44.205       "io_failed": 0,
00:24:44.205       "io_timeout": 0,
00:24:44.205       "avg_latency_us": 22838.848932758436,
00:24:44.205       "min_latency_us": 6116.693333333334,
00:24:44.205       "max_latency_us": 79953.92
00:24:44.205     }
00:24:44.205   ],
00:24:44.205   "core_count": 1
00:24:44.206 }
00:24:44.206 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2868385
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2868385 ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2868385
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2868385
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2868385'
killing process with pid 2868385
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2868385
Received shutdown signal, test time was about 10.000000 seconds
00:24:44.206
00:24:44.206 Latency(us)
00:24:44.206 [2024-11-20T05:35:04.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:44.206 [2024-11-20T05:35:04.485Z] ===================================================================================================================
00:24:44.206 [2024-11-20T05:35:04.485Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2868385
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:44.467 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2868048 ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2868048
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2868048 ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2868048
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2868048
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2868048'
killing process with pid 2868048
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2868048
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2868048
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:47.016 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Lq2
00:24:47.016
00:24:47.016 real 0m23.198s
00:24:47.016 user 0m24.969s
00:24:47.016 sys 0m9.601s
06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable
06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:47.016 ************************************
00:24:47.016 END TEST nvmf_fips
00:24:47.016 ************************************
00:24:47.016 06:35:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
06:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
06:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
06:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:47.016 ************************************
00:24:47.016 START TEST nvmf_control_msg_list
00:24:47.016 ************************************
00:24:47.016 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:47.016 * Looking for test storage...
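Note: the nvmf_fips teardown traced above follows the harness's generic exit pattern: archive any SPDK shared-memory trace files from /dev/shm, kill the target process, then unload the kernel initiator modules. A minimal standalone sketch of that pattern in Bash, in which the output directory and the PID lookup are hypothetical placeholders rather than values from this run:

    #!/usr/bin/env bash
    set -euo pipefail
    out_dir=/tmp/spdk-output                # hypothetical output location, not from this run
    mkdir -p "$out_dir"
    # Archive every app shared-memory trace file, mirroring process_shm above.
    while read -r shm; do
      tar -C /dev/shm/ -cvzf "$out_dir/${shm}_shm.tar.gz" "$shm"
    done < <(find /dev/shm -name '*.0' -printf '%f\n')
    # Stop the target and wait for it to exit, mirroring killprocess above.
    pid=$(pgrep -of nvmf_tgt || true)       # hypothetical PID lookup
    if [[ -n "$pid" ]]; then
      kill "$pid"
      while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
    fi
    # Unload the NVMe-oF initiator modules, mirroring nvmftestfini above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics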
00:24:47.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:47.016 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:47.016 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:47.016 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.016 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:47.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.016 --rc genhtml_branch_coverage=1 00:24:47.017 --rc genhtml_function_coverage=1 00:24:47.017 --rc genhtml_legend=1 00:24:47.017 --rc geninfo_all_blocks=1 00:24:47.017 --rc geninfo_unexecuted_blocks=1 00:24:47.017 00:24:47.017 ' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.017 --rc genhtml_branch_coverage=1 00:24:47.017 --rc genhtml_function_coverage=1 00:24:47.017 --rc genhtml_legend=1 00:24:47.017 --rc geninfo_all_blocks=1 00:24:47.017 --rc geninfo_unexecuted_blocks=1 00:24:47.017 00:24:47.017 ' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.017 --rc genhtml_branch_coverage=1 00:24:47.017 --rc genhtml_function_coverage=1 00:24:47.017 --rc genhtml_legend=1 00:24:47.017 --rc geninfo_all_blocks=1 00:24:47.017 --rc geninfo_unexecuted_blocks=1 00:24:47.017 00:24:47.017 ' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.017 --rc genhtml_branch_coverage=1 00:24:47.017 --rc genhtml_function_coverage=1 00:24:47.017 --rc genhtml_legend=1 00:24:47.017 --rc geninfo_all_blocks=1 00:24:47.017 --rc geninfo_unexecuted_blocks=1 00:24:47.017 00:24:47.017 ' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.017 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:55.158 06:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:55.158 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.158 06:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:55.158 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.158 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:55.159 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:55.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.159 06:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:55.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:24:55.159 00:24:55.159 --- 10.0.0.2 ping statistics --- 00:24:55.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.159 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:24:55.159 00:24:55.159 --- 10.0.0.1 ping statistics --- 00:24:55.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.159 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2874784 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2874784 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 2874784 ']' 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100
06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable
06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:55.159 [2024-11-20 06:35:14.689342] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:24:55.159 [2024-11-20 06:35:14.689410] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:55.159 [2024-11-20 06:35:14.762574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:55.159 [2024-11-20 06:35:14.808774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:55.159 [2024-11-20 06:35:14.808820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:55.159 [2024-11-20 06:35:14.808827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:55.159 [2024-11-20 06:35:14.808832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:55.159 [2024-11-20 06:35:14.808837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
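Note: the startup traced above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for its RPC socket to answer before configuring it. A minimal sketch of that launch-and-wait step; SPDK_DIR is a hypothetical placeholder for a built SPDK tree, and rpc_get_methods is used here only as a cheap liveness probe:

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical location of a built SPDK tree
    # Start the target in the namespace set up earlier: shm id 0, all tracepoint groups on.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    pid=$!
    # Poll the default RPC socket until the app answers, like waitforlisten above.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
    done
    echo "nvmf_tgt (pid $pid) is up on /var/tmp/spdk.sock"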
00:24:55.159 [2024-11-20 06:35:14.809468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.159 [2024-11-20 06:35:14.974505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.159 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.160 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.160 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:55.160 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.160 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.160 Malloc0 00:24:55.160 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.160 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:55.160 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.160 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:55.160 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.160 06:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:55.160 [2024-11-20 06:35:15.028228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2874983
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2874985
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2874987
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2874983
06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:55.160 [2024-11-20 06:35:15.139118] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:55.160 [2024-11-20 06:35:15.139470] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:55.160 [2024-11-20 06:35:15.139779] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:56.102 Initializing NVMe Controllers
00:24:56.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:56.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:56.102 Initialization complete. Launching workers.
00:24:56.102 ========================================================
00:24:56.102 Latency(us)
00:24:56.102 Device Information : IOPS MiB/s Average min max
00:24:56.102 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2303.00 9.00 433.96 132.07 619.85
00:24:56.102 ========================================================
00:24:56.102 Total : 2303.00 9.00 433.96 132.07 619.85
00:24:56.102
00:24:56.102 Initializing NVMe Controllers
00:24:56.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:56.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:24:56.102 Initialization complete. Launching workers.
00:24:56.102 ========================================================
00:24:56.102 Latency(us)
00:24:56.102 Device Information : IOPS MiB/s Average min max
00:24:56.102 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 24.00 0.09 41866.83 40925.10 42040.62
00:24:56.102 ========================================================
00:24:56.102 Total : 24.00 0.09 41866.83 40925.10 42040.62
00:24:56.102
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2874985
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2874987
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
Initialization complete. Launching workers.
========================================================
Latency(us)
Device Information : IOPS MiB/s Average min max
TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40915.98 40712.09 41116.68
========================================================
Total : 25.00 0.10 40915.98 40712.09 41116.68
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517
-- # '[' -n 2874784 ']' 00:24:56.102 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2874784 00:24:56.102 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 2874784 ']' 00:24:56.102 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 2874784 00:24:56.102 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:24:56.102 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:56.102 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2874784 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2874784' 00:24:56.364 killing process with pid 2874784 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 2874784 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 2874784 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.364 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.967 00:24:58.967 real 0m11.807s 00:24:58.967 user 0m7.229s 00:24:58.967 sys 0m6.502s 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.967 ************************************ 00:24:58.967 END TEST nvmf_control_msg_list 00:24:58.967 
************************************ 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:58.967 ************************************ 00:24:58.967 START TEST nvmf_wait_for_buf 00:24:58.967 ************************************ 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:58.967 * Looking for test storage... 00:24:58.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.967 --rc genhtml_branch_coverage=1 00:24:58.967 --rc genhtml_function_coverage=1 00:24:58.967 --rc genhtml_legend=1 00:24:58.967 --rc geninfo_all_blocks=1 00:24:58.967 --rc geninfo_unexecuted_blocks=1 00:24:58.967 00:24:58.967 ' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.967 --rc genhtml_branch_coverage=1 00:24:58.967 --rc genhtml_function_coverage=1 00:24:58.967 --rc genhtml_legend=1 00:24:58.967 --rc geninfo_all_blocks=1 00:24:58.967 --rc geninfo_unexecuted_blocks=1 00:24:58.967 00:24:58.967 ' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.967 --rc genhtml_branch_coverage=1 00:24:58.967 --rc genhtml_function_coverage=1 00:24:58.967 --rc genhtml_legend=1 00:24:58.967 --rc geninfo_all_blocks=1 00:24:58.967 --rc geninfo_unexecuted_blocks=1 00:24:58.967 00:24:58.967 ' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.967 --rc genhtml_branch_coverage=1 00:24:58.967 --rc genhtml_function_coverage=1 00:24:58.967 --rc genhtml_legend=1 00:24:58.967 --rc geninfo_all_blocks=1 00:24:58.967 --rc geninfo_unexecuted_blocks=1 00:24:58.967 00:24:58.967 ' 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.967 06:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.967 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
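Note: the "[: : integer expression expected" diagnostic recorded above comes from the traced script handing an empty string to a numeric -eq test ('[' '' -eq 1 ']'): the [ builtin rejects the operand, the condition evaluates false, and the run continues. A defensive sketch of the same kind of check (illustrative only, not a patch to SPDK's common.sh):

    flag=""                          # an unset/empty config flag, as in the trace
    if [ "${flag:-0}" -eq 1 ]; then  # default empty to 0 so -eq always sees an integer
        echo "feature enabled"
    fi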
'[' -z tcp ']' 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.968 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.192 
06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:07.192 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.192 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:07.192 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:07.193 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:07.193 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.193 06:35:26 
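Note: gather_supported_nvmf_pci_devs resolves each whitelisted NIC (here two Intel 0x159b E810 ports bound to ice) to its kernel interface name by globbing the device's net/ directory in sysfs, which is where the "Found net devices under ..." lines come from. The lookup pattern, assuming only the standard sysfs layout:

    pci=0000:4b:00.0
    # the kernel publishes the interface name(s) of a bound NIC under its PCI node
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] && echo "Found net device under $pci: ${net##*/}"
    done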
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:25:07.193 00:25:07.193 --- 10.0.0.2 ping statistics --- 00:25:07.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.193 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:25:07.193 00:25:07.193 --- 10.0.0.1 ping statistics --- 00:25:07.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.193 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2879424 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2879424 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 2879424 ']' 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:07.193 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.193 [2024-11-20 06:35:26.569325] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
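Note: nvmf_tcp_init splits the two E810 ports into a target/initiator pair on one host: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, an iptables rule opens TCP/4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace with --wait-for-rpc. Condensed from the commands in the trace (paths shortened, error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
    # then: ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc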
00:25:07.193 [2024-11-20 06:35:26.569390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.193 [2024-11-20 06:35:26.670037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.193 [2024-11-20 06:35:26.720439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.193 [2024-11-20 06:35:26.720491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.193 [2024-11-20 06:35:26.720500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.193 [2024-11-20 06:35:26.720507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.193 [2024-11-20 06:35:26.720513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.193 [2024-11-20 06:35:26.721323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.193 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:07.193 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:25:07.193 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:07.194 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.194 06:35:27 
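Note: because the target was started with --wait-for-rpc, its subsystems stay halted until framework_start_init, which lets the test shrink the iobuf small pool to 154 buffers (and zero the accel caches) first; a pool that small is what forces the buffer-exhaustion path this test exists to cover. rpc_cmd is the harness wrapper around scripts/rpc.py; the direct equivalent, assuming the default /var/tmp/spdk.sock socket and with the flags copied verbatim from the trace, would be:

    scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    scripts/rpc.py framework_start_init    # subsystems only finish initializing now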
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 Malloc0 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 [2024-11-20 06:35:27.547324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 [2024-11-20 06:35:27.583627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.462 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.462 [2024-11-20 06:35:27.689280] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.851 Initializing NVMe Controllers 00:25:08.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:08.851 Initialization complete. Launching workers. 00:25:08.851 ======================================================== 00:25:08.851 Latency(us) 00:25:08.851 Device Information : IOPS MiB/s Average min max 00:25:08.851 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32264.26 8012.98 63856.15 00:25:08.851 ======================================================== 00:25:08.851 Total : 129.00 16.12 32264.26 8012.98 63856.15 00:25:08.851 00:25:08.851 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:08.851 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.851 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:08.851 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.851 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.111 rmmod nvme_tcp 00:25:09.111 rmmod nvme_fabrics 00:25:09.111 rmmod nvme_keyring 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2879424 ']' 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2879424 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 2879424 ']' 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 2879424 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
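Note: the retry-count assertion above is the whole point of the test: with the transport capped at 24 buffers (-n 24 -b 24), the queue-depth-4, 128 KiB perf run must leave a non-zero small-pool retry count (2038 here) in iobuf_get_stats, proving that requests really had to wait for buffers. The check, with the jq filter from the trace:

    retry_count=$(rpc_cmd iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && echo "FAIL: wait-for-buf path was never exercised"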
common/autotest_common.sh@957 -- # uname 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2879424 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2879424' 00:25:09.111 killing process with pid 2879424 00:25:09.111 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 2879424 00:25:09.112 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 2879424 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.372 06:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.287 06:35:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.287 00:25:11.287 real 0m12.782s 00:25:11.287 user 0m5.231s 00:25:11.287 sys 0m6.142s 00:25:11.287 06:35:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:11.287 06:35:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.287 ************************************ 00:25:11.287 END TEST nvmf_wait_for_buf 00:25:11.287 ************************************ 00:25:11.548 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:11.548 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:11.548 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:11.548 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:11.548 06:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.548 06:35:31 
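Note: nvmftestfini's teardown unloads the nvme modules, kills the target, and restores the firewall by replaying the saved ruleset minus every rule carrying the SPDK_NVMF comment; tagging each rule at insert time is what makes the cleanup selective. The one-liner behind the trace's iptr helper:

    # remove only the rules this test added: they all carry the SPDK_NVMF tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore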
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:19.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:19.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:19.690 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:19.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:19.690 ************************************ 00:25:19.690 START TEST nvmf_perf_adq 00:25:19.690 ************************************ 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:19.690 * Looking for test storage... 00:25:19.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.690 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.691 06:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.691 --rc genhtml_branch_coverage=1 00:25:19.691 --rc genhtml_function_coverage=1 00:25:19.691 --rc genhtml_legend=1 00:25:19.691 --rc geninfo_all_blocks=1 00:25:19.691 --rc geninfo_unexecuted_blocks=1 00:25:19.691 00:25:19.691 ' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.691 --rc genhtml_branch_coverage=1 00:25:19.691 --rc genhtml_function_coverage=1 00:25:19.691 --rc genhtml_legend=1 00:25:19.691 --rc geninfo_all_blocks=1 00:25:19.691 --rc geninfo_unexecuted_blocks=1 00:25:19.691 00:25:19.691 ' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.691 --rc genhtml_branch_coverage=1 00:25:19.691 --rc genhtml_function_coverage=1 00:25:19.691 --rc genhtml_legend=1 00:25:19.691 --rc geninfo_all_blocks=1 00:25:19.691 --rc geninfo_unexecuted_blocks=1 00:25:19.691 00:25:19.691 ' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.691 --rc genhtml_branch_coverage=1 00:25:19.691 --rc genhtml_function_coverage=1 00:25:19.691 --rc genhtml_legend=1 00:25:19.691 --rc geninfo_all_blocks=1 00:25:19.691 --rc geninfo_unexecuted_blocks=1 00:25:19.691 00:25:19.691 ' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:19.691 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.691 06:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:26.276 06:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:26.276 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:26.276 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.276 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:26.277 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:26.277 06:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:26.277 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:26.277 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:27.745 06:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:29.655 06:35:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
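
The adq_reload_driver step traced above reduces to a short shell sequence; this condensed sketch mirrors the commands in the trace (the ice driver and the 5-second settle delay are specific to the E810 NICs on this rig):

  # reload the ice driver so ADQ-related NIC state starts clean
  modprobe -a sch_mqprio   # qdisc module used later for the mqprio traffic classes
  rmmod ice                # unload the Intel E810 driver
  modprobe ice             # reload it
  sleep 5                  # give the NIC time to re-register its net devices
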
gather_supported_nvmf_pci_devs 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:34.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:34.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.949 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:34.950 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:34.950 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.950 06:35:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:34.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:25:34.950 00:25:34.950 --- 10.0.0.2 ping statistics --- 00:25:34.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.950 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:25:34.950 00:25:34.950 --- 10.0.0.1 ping statistics --- 00:25:34.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.950 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.950 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2889665 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2889665 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2889665 ']' 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:35.211 06:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:35.211 [2024-11-20 06:35:55.304510] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
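
The nvmf_tcp_init sequence traced above splits the two E810 ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a dedicated namespace (target, 10.0.0.2 on cvl_0_0), then verifies reachability in both directions. A condensed sketch of the same setup, using the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the comment tag lets teardown strip only this rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
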
00:25:35.211 [2024-11-20 06:35:55.304573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.211 [2024-11-20 06:35:55.404687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:35.211 [2024-11-20 06:35:55.459911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.212 [2024-11-20 06:35:55.459969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.212 [2024-11-20 06:35:55.459982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.212 [2024-11-20 06:35:55.459990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.212 [2024-11-20 06:35:55.459996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.212 [2024-11-20 06:35:55.462028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.212 [2024-11-20 06:35:55.462206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.212 [2024-11-20 06:35:55.462321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.212 [2024-11-20 06:35:55.462322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 
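
nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with --wait-for-rpc (so framework init is deferred until an RPC says go) and blocks until the RPC socket answers. A rough sketch run from the spdk checkout; the polling loop here is a simplified stand-in for the waitforlisten helper in autotest_common.sh, not its actual implementation:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket responds
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
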
06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 [2024-11-20 06:35:56.332260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 Malloc1 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 [2024-11-20 06:35:56.411895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2889879 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:36.155 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
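
adq_configure_nvmf_target 0, traced above, boils down to the RPC sequence below. The rpc helper is a hypothetical shorthand for the rpc_cmd wrapper in the test scripts; all flags and arguments are copied from the trace, with placement id 0 selecting the non-ADQ baseline behavior for this first run:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }    # hypothetical shorthand for rpc_cmd
  impl=$(rpc sock_get_default_impl | jq -r .impl_name)      # "posix" on this run
  rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
  rpc framework_start_init                                  # finish the startup deferred by --wait-for-rpc
  rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc bdev_malloc_create 64 512 -b Malloc1                  # 64 MiB RAM-backed namespace
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
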
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:38.730 "tick_rate": 2400000000, 00:25:38.730 "poll_groups": [ 00:25:38.730 { 00:25:38.730 "name": "nvmf_tgt_poll_group_000", 00:25:38.730 "admin_qpairs": 1, 00:25:38.730 "io_qpairs": 1, 00:25:38.730 "current_admin_qpairs": 1, 00:25:38.730 "current_io_qpairs": 1, 00:25:38.730 "pending_bdev_io": 0, 00:25:38.730 "completed_nvme_io": 17322, 00:25:38.730 "transports": [ 00:25:38.730 { 00:25:38.730 "trtype": "TCP" 00:25:38.730 } 00:25:38.730 ] 00:25:38.730 }, 00:25:38.730 { 00:25:38.730 "name": "nvmf_tgt_poll_group_001", 00:25:38.730 "admin_qpairs": 0, 00:25:38.730 "io_qpairs": 1, 00:25:38.730 "current_admin_qpairs": 0, 00:25:38.730 "current_io_qpairs": 1, 00:25:38.730 "pending_bdev_io": 0, 00:25:38.730 "completed_nvme_io": 20134, 00:25:38.730 "transports": [ 00:25:38.730 { 00:25:38.730 "trtype": "TCP" 00:25:38.730 } 00:25:38.730 ] 00:25:38.730 }, 00:25:38.730 { 00:25:38.730 "name": "nvmf_tgt_poll_group_002", 00:25:38.730 "admin_qpairs": 0, 00:25:38.730 "io_qpairs": 1, 00:25:38.730 "current_admin_qpairs": 0, 00:25:38.730 "current_io_qpairs": 1, 00:25:38.730 "pending_bdev_io": 0, 00:25:38.730 "completed_nvme_io": 20380, 00:25:38.730 "transports": [ 00:25:38.730 { 00:25:38.730 "trtype": "TCP" 00:25:38.730 } 00:25:38.730 ] 00:25:38.730 }, 00:25:38.730 { 00:25:38.730 "name": "nvmf_tgt_poll_group_003", 00:25:38.730 "admin_qpairs": 0, 00:25:38.730 "io_qpairs": 1, 00:25:38.730 "current_admin_qpairs": 0, 00:25:38.730 "current_io_qpairs": 1, 00:25:38.730 "pending_bdev_io": 0, 00:25:38.730 "completed_nvme_io": 16955, 00:25:38.730 "transports": [ 00:25:38.730 { 00:25:38.730 "trtype": "TCP" 00:25:38.730 } 00:25:38.730 ] 00:25:38.730 } 00:25:38.730 ] 00:25:38.730 }' 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:38.730 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2889879 00:25:46.861 Initializing NVMe Controllers 00:25:46.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:46.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:46.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:46.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
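
The perf_adq.sh@86 check traced above asserts that the 4-core perf load (-c 0xF0) ended up with exactly one I/O qpair on each of the target's four poll groups. Condensed from the trace, reusing the rpc shorthand sketched earlier:

  count=$(rpc nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
      | wc -l)             # one output line per poll group holding exactly 1 I/O qpair
  [[ $count -ne 4 ]] && echo "qpairs not spread across all 4 poll groups" >&2
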
lcore 7 00:25:46.861 Initialization complete. Launching workers. 00:25:46.861 ======================================================== 00:25:46.861 Latency(us) 00:25:46.861 Device Information : IOPS MiB/s Average min max 00:25:46.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12954.58 50.60 4940.04 1503.70 11557.97 00:25:46.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13854.33 54.12 4619.96 1633.41 13054.04 00:25:46.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14231.91 55.59 4496.53 1331.66 12357.24 00:25:46.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12277.02 47.96 5212.35 1348.18 13035.52 00:25:46.861 ======================================================== 00:25:46.861 Total : 53317.85 208.27 4801.19 1331.66 13054.04 00:25:46.861 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.861 rmmod nvme_tcp 00:25:46.861 rmmod nvme_fabrics 00:25:46.861 rmmod nvme_keyring 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2889665 ']' 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2889665 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2889665 ']' 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2889665 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2889665 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2889665' 00:25:46.861 killing process with pid 2889665 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2889665 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2889665 00:25:46.861 06:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.861 06:36:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.770 06:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.770 06:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:48.770 06:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:48.770 06:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:50.684 06:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:52.599 06:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
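
The nvmftestfini teardown traced above, condensed; the SPDK_NVMF comment tag planted at setup is what lets the iptables-save pipeline drop only the rules this test added. The trace runs _remove_spdk_ns with xtrace disabled, so the ip netns delete line is an assumption about what that helper does, not a copy of it:

  kill "$nvmfpid" && wait "$nvmfpid"       # stop the target app
  modprobe -v -r nvme-tcp                  # also pulls out nvme_fabrics/nvme_keyring deps
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged rules
  ip netns delete cvl_0_0_ns_spdk          # assumed: returns cvl_0_0 to the root namespace
  ip -4 addr flush cvl_0_1
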
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:57.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:57.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:57.893 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.893 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:57.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.894 06:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.894 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:25:57.894 00:25:57.894 --- 10.0.0.2 ping statistics --- 00:25:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.894 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:25:57.894 00:25:57.894 --- 10.0.0.1 ping statistics --- 00:25:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.894 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:57.894 net.core.busy_poll = 1 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:57.894 net.core.busy_read = 1 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:57.894 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2895043 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2895043 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2895043 ']' 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:58.155 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:58.418 [2024-11-20 06:36:18.438271] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:25:58.418 [2024-11-20 06:36:18.438341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.418 [2024-11-20 06:36:18.538646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.418 [2024-11-20 06:36:18.591581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
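
adq_configure_driver, traced above, is the heart of the ADQ run: it enables hardware TC offload on the E810 target port, turns on socket busy polling, and steers NVMe/TCP traffic (dst port 4420) into its own hardware traffic class. Condensed from the trace, with the device and namespace names from this run:

  ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # run on the target port's namespace
  ns ethtool --offload cvl_0_0 hw-tc-offload on
  ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: tc0 = queues 0-1 (default), tc1 = queues 2-3 (ADQ)
  ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ns tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP (10.0.0.2:4420) into tc1, offloaded to the NIC (skip_sw)
  ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  ns ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0    # align XPS with the matching rx queues
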
00:25:58.418 [2024-11-20 06:36:18.591634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.418 [2024-11-20 06:36:18.591643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.418 [2024-11-20 06:36:18.591650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.418 [2024-11-20 06:36:18.591656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.418 [2024-11-20 06:36:18.594118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.418 [2024-11-20 06:36:18.594282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.418 [2024-11-20 06:36:18.594331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.418 [2024-11-20 06:36:18.594330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.992 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:58.992 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:25:58.992 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.992 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:58.992 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.254 06:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.254 [2024-11-20 06:36:19.460738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.254 Malloc1 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.254 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:59.516 [2024-11-20 06:36:19.540246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2895392 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:59.516 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.431 06:36:21 
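For reference, the target provisioning traced above (perf_adq.sh@42 through @49, plus the perf launch at @101) reduces to the following RPC sequence. rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; this is a hedged sketch shown unwrapped:

rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: 4 cores (mask 0xF0), queue depth 64, 4 KiB random reads for 10 s
spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'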
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:01.431 "tick_rate": 2400000000, 00:26:01.431 "poll_groups": [ 00:26:01.431 { 00:26:01.431 "name": "nvmf_tgt_poll_group_000", 00:26:01.431 "admin_qpairs": 1, 00:26:01.431 "io_qpairs": 2, 00:26:01.431 "current_admin_qpairs": 1, 00:26:01.431 "current_io_qpairs": 2, 00:26:01.431 "pending_bdev_io": 0, 00:26:01.431 "completed_nvme_io": 27794, 00:26:01.431 "transports": [ 00:26:01.431 { 00:26:01.431 "trtype": "TCP" 00:26:01.431 } 00:26:01.431 ] 00:26:01.431 }, 00:26:01.431 { 00:26:01.431 "name": "nvmf_tgt_poll_group_001", 00:26:01.431 "admin_qpairs": 0, 00:26:01.431 "io_qpairs": 2, 00:26:01.431 "current_admin_qpairs": 0, 00:26:01.431 "current_io_qpairs": 2, 00:26:01.431 "pending_bdev_io": 0, 00:26:01.431 "completed_nvme_io": 29693, 00:26:01.431 "transports": [ 00:26:01.431 { 00:26:01.431 "trtype": "TCP" 00:26:01.431 } 00:26:01.431 ] 00:26:01.431 }, 00:26:01.431 { 00:26:01.431 "name": "nvmf_tgt_poll_group_002", 00:26:01.431 "admin_qpairs": 0, 00:26:01.431 "io_qpairs": 0, 00:26:01.431 "current_admin_qpairs": 0, 00:26:01.431 "current_io_qpairs": 0, 00:26:01.431 "pending_bdev_io": 0, 00:26:01.431 "completed_nvme_io": 0, 00:26:01.431 "transports": [ 00:26:01.431 { 00:26:01.431 "trtype": "TCP" 00:26:01.431 } 00:26:01.431 ] 00:26:01.431 }, 00:26:01.431 { 00:26:01.431 "name": "nvmf_tgt_poll_group_003", 00:26:01.431 "admin_qpairs": 0, 00:26:01.431 "io_qpairs": 0, 00:26:01.431 "current_admin_qpairs": 0, 00:26:01.431 "current_io_qpairs": 0, 00:26:01.431 "pending_bdev_io": 0, 00:26:01.431 "completed_nvme_io": 0, 00:26:01.431 "transports": [ 00:26:01.431 { 00:26:01.431 "trtype": "TCP" 00:26:01.431 } 00:26:01.431 ] 00:26:01.431 } 00:26:01.431 ] 00:26:01.431 }' 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:26:01.431 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2895392 00:26:09.571 Initializing NVMe Controllers 00:26:09.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:09.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:09.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:09.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:09.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:09.571 Initialization complete. Launching workers. 
00:26:09.571 ======================================================== 00:26:09.571 Latency(us) 00:26:09.571 Device Information : IOPS MiB/s Average min max 00:26:09.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9055.70 35.37 7069.97 997.10 52482.46 00:26:09.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9873.80 38.57 6483.72 1148.75 52857.36 00:26:09.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9080.10 35.47 7050.73 1074.74 54602.71 00:26:09.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9252.50 36.14 6918.25 981.31 57664.42 00:26:09.571 ======================================================== 00:26:09.571 Total : 37262.09 145.56 6872.26 981.31 57664.42 00:26:09.571 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.571 rmmod nvme_tcp 00:26:09.571 rmmod nvme_fabrics 00:26:09.571 rmmod nvme_keyring 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2895043 ']' 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2895043 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2895043 ']' 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2895043 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2895043 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2895043' 00:26:09.571 killing process with pid 2895043 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2895043 00:26:09.571 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2895043 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.833 06:36:29 
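The pass/fail gate above (perf_adq.sh@107 through @109) inspects nvmf_get_stats: with a 0xF core mask and ADQ steering traffic into two hardware queues, exactly two of the four poll groups should have carried no I/O qpairs. A hedged sketch of that check (rpc_cmd is the harness RPC wrapper; the jq 'length' value itself is unused, only the line count matters):

idle=$(rpc_cmd nvmf_get_stats \
       | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
       | wc -l)                     # one output line per idle poll group
[[ $idle -lt 2 ]] && echo "ADQ steering failed: I/O spread across all poll groups" >&2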
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.833 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.746 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.746 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:11.746 00:26:11.746 real 0m53.253s 00:26:11.746 user 2m49.649s 00:26:11.746 sys 0m11.730s 00:26:11.746 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:11.746 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.746 ************************************ 00:26:11.746 END TEST nvmf_perf_adq 00:26:11.746 ************************************ 00:26:12.006 06:36:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:12.006 06:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:12.006 06:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:12.007 06:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:12.007 ************************************ 00:26:12.007 START TEST nvmf_shutdown 00:26:12.007 ************************************ 00:26:12.007 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:12.007 * Looking for test storage... 
00:26:12.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:12.007 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:12.007 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:26:12.007 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.267 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:12.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.268 --rc genhtml_branch_coverage=1 00:26:12.268 --rc genhtml_function_coverage=1 00:26:12.268 --rc genhtml_legend=1 00:26:12.268 --rc geninfo_all_blocks=1 00:26:12.268 --rc geninfo_unexecuted_blocks=1 00:26:12.268 00:26:12.268 ' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:12.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.268 --rc genhtml_branch_coverage=1 00:26:12.268 --rc genhtml_function_coverage=1 00:26:12.268 --rc genhtml_legend=1 00:26:12.268 --rc geninfo_all_blocks=1 00:26:12.268 --rc geninfo_unexecuted_blocks=1 00:26:12.268 00:26:12.268 ' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:12.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.268 --rc genhtml_branch_coverage=1 00:26:12.268 --rc genhtml_function_coverage=1 00:26:12.268 --rc genhtml_legend=1 00:26:12.268 --rc geninfo_all_blocks=1 00:26:12.268 --rc geninfo_unexecuted_blocks=1 00:26:12.268 00:26:12.268 ' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:12.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.268 --rc genhtml_branch_coverage=1 00:26:12.268 --rc genhtml_function_coverage=1 00:26:12.268 --rc genhtml_legend=1 00:26:12.268 --rc geninfo_all_blocks=1 00:26:12.268 --rc geninfo_unexecuted_blocks=1 00:26:12.268 00:26:12.268 ' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
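The scripts/common.sh helpers traced above implement a field-wise version comparison, here establishing that lcov 1.15 sorts before 2 so the legacy --rc lcov_* option spelling is selected. A compact sketch of an equivalent check, using sort -V in place of the manual digit walk (illustrative only, not the harness implementation):

lt() {   # true when $1 sorts strictly before $2 in version order
    [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lt 1.15 2 && echo "lcov 1.15 predates 2: use legacy lcov_branch_coverage options"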
00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:12.268 06:36:32 
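The benign bash complaint captured above ("[: : integer expression expected" at nvmf/common.sh line 33) comes from comparing an unset variable numerically with -eq, as the traced '[' '' -eq 1 ']' shows. The variable's real name is not visible in this log; a generic guard of the usual shape, with a hypothetical flag name:

# SOME_FLAG is a placeholder; defaulting to 0 avoids '[' '' -eq 1 ']'
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    :   # flag-specific behavior would go here
fi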
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:12.268 ************************************ 00:26:12.268 START TEST nvmf_shutdown_tc1 00:26:12.268 ************************************ 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.268 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.269 06:36:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.413 06:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.413 06:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:20.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:20.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:20.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:20.413 06:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:20.413 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.413 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:26:20.414 00:26:20.414 --- 10.0.0.2 ping statistics --- 00:26:20.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.414 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:26:20.414 00:26:20.414 --- 10.0.0.1 ping statistics --- 00:26:20.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.414 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2901533 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2901533 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2901533 ']' 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
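The same two-port loopback topology used in the perf run is rebuilt above for shutdown_tc1: the target-side E810 port is moved into a network namespace and addressed so the host-side port can reach it over the physical cable. A condensed sketch, with names and addresses from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                      # host -> namespaced target sanity check

The SPDK_NVMF comment tag is what lets nvmftestfini strip the rule later with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of the perf test above.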
00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:20.414 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.414 [2024-11-20 06:36:39.968592] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:26:20.414 [2024-11-20 06:36:39.968660] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.414 [2024-11-20 06:36:40.069933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.414 [2024-11-20 06:36:40.122945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.414 [2024-11-20 06:36:40.122997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.414 [2024-11-20 06:36:40.123007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.414 [2024-11-20 06:36:40.123015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.414 [2024-11-20 06:36:40.123021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.414 [2024-11-20 06:36:40.125087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.414 [2024-11-20 06:36:40.125238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.414 [2024-11-20 06:36:40.125460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:20.414 [2024-11-20 06:36:40.125462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.676 [2024-11-20 06:36:40.853741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:20.676 06:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.676 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.937 Malloc1 
00:26:20.937 [2024-11-20 06:36:40.989564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.937 Malloc2 00:26:20.937 Malloc3 00:26:20.937 Malloc4 00:26:20.937 Malloc5 00:26:20.937 Malloc6 00:26:21.217 Malloc7 00:26:21.217 Malloc8 00:26:21.217 Malloc9 00:26:21.217 Malloc10 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2901914 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2901914 /var/tmp/bdevperf.sock 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2901914 ']' 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:21.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
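The ten Malloc bdevs above come from shutdown.sh@27 through @36: the script batches one block of RPCs per subsystem into rpcs.txt and replays the whole file in a single rpc_cmd call. A hedged reconstruction (sizes 64 MiB / 512 B match MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set earlier; the SPDK$i serial strings are an assumption):

rm -f rpcs.txt
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt    # rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock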
00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.217 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.217 { 00:26:21.217 "params": { 00:26:21.217 "name": "Nvme$subsystem", 00:26:21.217 "trtype": "$TEST_TRANSPORT", 00:26:21.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.217 "adrfam": "ipv4", 00:26:21.218 "trsvcid": "$NVMF_PORT", 00:26:21.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.218 "hdgst": ${hdgst:-false}, 00:26:21.218 "ddgst": ${ddgst:-false} 00:26:21.218 }, 00:26:21.218 "method": "bdev_nvme_attach_controller" 00:26:21.218 } 00:26:21.218 EOF 00:26:21.218 )") 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.218 { 00:26:21.218 "params": { 00:26:21.218 "name": "Nvme$subsystem", 00:26:21.218 "trtype": "$TEST_TRANSPORT", 00:26:21.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.218 "adrfam": "ipv4", 00:26:21.218 "trsvcid": "$NVMF_PORT", 00:26:21.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.218 "hdgst": ${hdgst:-false}, 00:26:21.218 "ddgst": ${ddgst:-false} 00:26:21.218 }, 00:26:21.218 "method": "bdev_nvme_attach_controller" 00:26:21.218 } 00:26:21.218 EOF 00:26:21.218 )") 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.218 { 00:26:21.218 "params": { 00:26:21.218 "name": "Nvme$subsystem", 00:26:21.218 "trtype": "$TEST_TRANSPORT", 00:26:21.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.218 "adrfam": "ipv4", 00:26:21.218 "trsvcid": "$NVMF_PORT", 00:26:21.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.218 "hdgst": ${hdgst:-false}, 00:26:21.218 "ddgst": ${ddgst:-false} 00:26:21.218 }, 00:26:21.218 "method": "bdev_nvme_attach_controller" 
00:26:21.218 } 00:26:21.218 EOF 00:26:21.218 )") 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.218 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.218 { 00:26:21.218 "params": { 00:26:21.218 "name": "Nvme$subsystem", 00:26:21.218 "trtype": "$TEST_TRANSPORT", 00:26:21.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.218 "adrfam": "ipv4", 00:26:21.218 "trsvcid": "$NVMF_PORT", 00:26:21.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.218 "hdgst": ${hdgst:-false}, 00:26:21.218 "ddgst": ${ddgst:-false} 00:26:21.218 }, 00:26:21.218 "method": "bdev_nvme_attach_controller" 00:26:21.218 } 00:26:21.218 EOF 00:26:21.218 )") 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.522 { 00:26:21.522 "params": { 00:26:21.522 "name": "Nvme$subsystem", 00:26:21.522 "trtype": "$TEST_TRANSPORT", 00:26:21.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.522 "adrfam": "ipv4", 00:26:21.522 "trsvcid": "$NVMF_PORT", 00:26:21.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.522 "hdgst": ${hdgst:-false}, 00:26:21.522 "ddgst": ${ddgst:-false} 00:26:21.522 }, 00:26:21.522 "method": "bdev_nvme_attach_controller" 00:26:21.522 } 00:26:21.522 EOF 00:26:21.522 )") 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.522 { 00:26:21.522 "params": { 00:26:21.522 "name": "Nvme$subsystem", 00:26:21.522 "trtype": "$TEST_TRANSPORT", 00:26:21.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.522 "adrfam": "ipv4", 00:26:21.522 "trsvcid": "$NVMF_PORT", 00:26:21.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.522 "hdgst": ${hdgst:-false}, 00:26:21.522 "ddgst": ${ddgst:-false} 00:26:21.522 }, 00:26:21.522 "method": "bdev_nvme_attach_controller" 00:26:21.522 } 00:26:21.522 EOF 00:26:21.522 )") 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.522 [2024-11-20 06:36:41.505078] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:26:21.522 [2024-11-20 06:36:41.505153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.522 { 00:26:21.522 "params": { 00:26:21.522 "name": "Nvme$subsystem", 00:26:21.522 "trtype": "$TEST_TRANSPORT", 00:26:21.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.522 "adrfam": "ipv4", 00:26:21.522 "trsvcid": "$NVMF_PORT", 00:26:21.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.522 "hdgst": ${hdgst:-false}, 00:26:21.522 "ddgst": ${ddgst:-false} 00:26:21.522 }, 00:26:21.522 "method": "bdev_nvme_attach_controller" 00:26:21.522 } 00:26:21.522 EOF 00:26:21.522 )") 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.522 { 00:26:21.522 "params": { 00:26:21.522 "name": "Nvme$subsystem", 00:26:21.522 "trtype": "$TEST_TRANSPORT", 00:26:21.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.522 "adrfam": "ipv4", 00:26:21.522 "trsvcid": "$NVMF_PORT", 00:26:21.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.522 "hdgst": ${hdgst:-false}, 00:26:21.522 "ddgst": ${ddgst:-false} 00:26:21.522 }, 00:26:21.522 "method": "bdev_nvme_attach_controller" 00:26:21.522 } 00:26:21.522 EOF 00:26:21.522 )") 00:26:21.522 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.523 { 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme$subsystem", 00:26:21.523 "trtype": "$TEST_TRANSPORT", 00:26:21.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "$NVMF_PORT", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.523 "hdgst": ${hdgst:-false}, 00:26:21.523 "ddgst": ${ddgst:-false} 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 } 00:26:21.523 EOF 00:26:21.523 )") 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.523 { 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme$subsystem", 00:26:21.523 "trtype": "$TEST_TRANSPORT", 00:26:21.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.523 "adrfam": "ipv4", 
00:26:21.523 "trsvcid": "$NVMF_PORT", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.523 "hdgst": ${hdgst:-false}, 00:26:21.523 "ddgst": ${ddgst:-false} 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 } 00:26:21.523 EOF 00:26:21.523 )") 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:21.523 06:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme1", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme2", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme3", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme4", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme5", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme6", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme7", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 
"adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme8", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme9", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 },{ 00:26:21.523 "params": { 00:26:21.523 "name": "Nvme10", 00:26:21.523 "trtype": "tcp", 00:26:21.523 "traddr": "10.0.0.2", 00:26:21.523 "adrfam": "ipv4", 00:26:21.523 "trsvcid": "4420", 00:26:21.523 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:21.523 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:21.523 "hdgst": false, 00:26:21.523 "ddgst": false 00:26:21.523 }, 00:26:21.523 "method": "bdev_nvme_attach_controller" 00:26:21.523 }' 00:26:21.523 [2024-11-20 06:36:41.601642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.523 [2024-11-20 06:36:41.655100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.957 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.957 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:22.957 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:22.957 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.957 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:22.958 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.958 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2901914 00:26:22.958 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:22.958 06:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:23.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2901914 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2901533 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.900 { 00:26:23.900 "params": { 00:26:23.900 "name": "Nvme$subsystem", 00:26:23.900 "trtype": "$TEST_TRANSPORT", 00:26:23.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.900 "adrfam": "ipv4", 00:26:23.900 "trsvcid": "$NVMF_PORT", 00:26:23.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.900 "hdgst": ${hdgst:-false}, 00:26:23.900 "ddgst": ${ddgst:-false} 00:26:23.900 }, 00:26:23.900 "method": "bdev_nvme_attach_controller" 00:26:23.900 } 00:26:23.900 EOF 00:26:23.900 )") 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.900 { 00:26:23.900 "params": { 00:26:23.900 "name": "Nvme$subsystem", 00:26:23.900 "trtype": "$TEST_TRANSPORT", 00:26:23.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.900 "adrfam": "ipv4", 00:26:23.900 "trsvcid": "$NVMF_PORT", 00:26:23.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.900 "hdgst": ${hdgst:-false}, 00:26:23.900 "ddgst": ${ddgst:-false} 00:26:23.900 }, 00:26:23.900 "method": "bdev_nvme_attach_controller" 00:26:23.900 } 00:26:23.900 EOF 00:26:23.900 )") 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.900 { 00:26:23.900 "params": { 00:26:23.900 "name": "Nvme$subsystem", 00:26:23.900 "trtype": "$TEST_TRANSPORT", 00:26:23.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.900 "adrfam": "ipv4", 00:26:23.900 "trsvcid": "$NVMF_PORT", 00:26:23.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.900 "hdgst": ${hdgst:-false}, 00:26:23.900 "ddgst": ${ddgst:-false} 00:26:23.900 }, 00:26:23.900 "method": "bdev_nvme_attach_controller" 00:26:23.900 } 00:26:23.900 EOF 00:26:23.900 )") 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.900 { 00:26:23.900 "params": { 00:26:23.900 "name": "Nvme$subsystem", 00:26:23.900 "trtype": "$TEST_TRANSPORT", 00:26:23.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.900 "adrfam": "ipv4", 00:26:23.900 "trsvcid": "$NVMF_PORT", 00:26:23.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.900 "hdgst": ${hdgst:-false}, 00:26:23.900 "ddgst": ${ddgst:-false} 00:26:23.900 }, 00:26:23.900 "method": "bdev_nvme_attach_controller" 00:26:23.900 } 00:26:23.900 EOF 00:26:23.900 )") 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.900 { 00:26:23.900 "params": { 00:26:23.900 "name": "Nvme$subsystem", 00:26:23.900 "trtype": "$TEST_TRANSPORT", 00:26:23.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.900 "adrfam": "ipv4", 00:26:23.900 "trsvcid": "$NVMF_PORT", 00:26:23.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.900 "hdgst": ${hdgst:-false}, 00:26:23.900 "ddgst": ${ddgst:-false} 00:26:23.900 }, 00:26:23.900 "method": "bdev_nvme_attach_controller" 00:26:23.900 } 00:26:23.900 EOF 00:26:23.900 )") 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.900 { 00:26:23.900 "params": { 00:26:23.900 "name": "Nvme$subsystem", 00:26:23.900 "trtype": "$TEST_TRANSPORT", 00:26:23.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.900 "adrfam": "ipv4", 00:26:23.900 "trsvcid": "$NVMF_PORT", 00:26:23.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.900 "hdgst": ${hdgst:-false}, 00:26:23.900 "ddgst": ${ddgst:-false} 00:26:23.900 }, 00:26:23.900 "method": "bdev_nvme_attach_controller" 00:26:23.900 } 00:26:23.900 EOF 00:26:23.900 )") 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.900 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.900 { 00:26:23.900 "params": { 00:26:23.900 "name": "Nvme$subsystem", 00:26:23.900 "trtype": "$TEST_TRANSPORT", 00:26:23.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.900 "adrfam": "ipv4", 00:26:23.900 "trsvcid": "$NVMF_PORT", 00:26:23.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.901 "hdgst": ${hdgst:-false}, 00:26:23.901 "ddgst": ${ddgst:-false} 00:26:23.901 }, 00:26:23.901 "method": "bdev_nvme_attach_controller" 00:26:23.901 } 00:26:23.901 EOF 00:26:23.901 )") 00:26:23.901 [2024-11-20 06:36:44.164822] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:26:23.901 [2024-11-20 06:36:44.164880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902480 ] 00:26:23.901 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:23.901 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:23.901 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:23.901 { 00:26:23.901 "params": { 00:26:23.901 "name": "Nvme$subsystem", 00:26:23.901 "trtype": "$TEST_TRANSPORT", 00:26:23.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.901 "adrfam": "ipv4", 00:26:23.901 "trsvcid": "$NVMF_PORT", 00:26:23.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.901 "hdgst": ${hdgst:-false}, 00:26:23.901 "ddgst": ${ddgst:-false} 00:26:23.901 }, 00:26:23.901 "method": "bdev_nvme_attach_controller" 00:26:23.901 } 00:26:23.901 EOF 00:26:23.901 )") 00:26:23.901 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:24.163 { 00:26:24.163 "params": { 00:26:24.163 "name": "Nvme$subsystem", 00:26:24.163 "trtype": "$TEST_TRANSPORT", 00:26:24.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.163 "adrfam": "ipv4", 00:26:24.163 "trsvcid": "$NVMF_PORT", 00:26:24.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.163 "hdgst": ${hdgst:-false}, 00:26:24.163 "ddgst": ${ddgst:-false} 00:26:24.163 }, 00:26:24.163 "method": "bdev_nvme_attach_controller" 00:26:24.163 } 00:26:24.163 EOF 00:26:24.163 )") 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:24.163 { 00:26:24.163 "params": { 00:26:24.163 "name": "Nvme$subsystem", 00:26:24.163 "trtype": "$TEST_TRANSPORT", 00:26:24.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.163 "adrfam": "ipv4", 00:26:24.163 "trsvcid": "$NVMF_PORT", 00:26:24.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.163 "hdgst": ${hdgst:-false}, 00:26:24.163 "ddgst": ${ddgst:-false} 00:26:24.163 }, 00:26:24.163 "method": "bdev_nvme_attach_controller" 00:26:24.163 } 00:26:24.163 EOF 00:26:24.163 )") 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
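The config+=/cat/jq chatter interleaved with the EAL messages above is gen_nvmf_target_json at work: nvmf/common.sh@562-582 accumulate one bdev_nvme_attach_controller stanza per subsystem in the config array, then @584-586 comma-join them and hand the result to jq for validation and pretty-printing before bdevperf consumes it via --json /dev/fd/62. Stripped of xtrace noise, the logic amounts to the sketch below; the stanza is taken from the trace, while the outer "subsystems"/"config" wrapper shape is paraphrased rather than copied from nvmf/common.sh:

config=()
for subsystem in "${@:-1}"; do
	# one attach-controller stanza per subsystem; $TEST_TRANSPORT,
	# $NVMF_FIRST_TARGET_IP and $NVMF_PORT expand when the heredoc is read
	config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
	)")
done
# nvmf/common.sh@584-586: join the stanzas with commas (IFS=,) and let jq
# validate/pretty-print the assembled configuration
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
EOF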
00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:24.163 06:36:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:24.163 "params": { 00:26:24.163 "name": "Nvme1", 00:26:24.163 "trtype": "tcp", 00:26:24.163 "traddr": "10.0.0.2", 00:26:24.163 "adrfam": "ipv4", 00:26:24.163 "trsvcid": "4420", 00:26:24.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.163 "hdgst": false, 00:26:24.163 "ddgst": false 00:26:24.163 }, 00:26:24.163 "method": "bdev_nvme_attach_controller" 00:26:24.163 },{ 00:26:24.163 "params": { 00:26:24.163 "name": "Nvme2", 00:26:24.163 "trtype": "tcp", 00:26:24.163 "traddr": "10.0.0.2", 00:26:24.163 "adrfam": "ipv4", 00:26:24.163 "trsvcid": "4420", 00:26:24.163 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:24.163 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:24.163 "hdgst": false, 00:26:24.163 "ddgst": false 00:26:24.163 }, 00:26:24.163 "method": "bdev_nvme_attach_controller" 00:26:24.163 },{ 00:26:24.163 "params": { 00:26:24.163 "name": "Nvme3", 00:26:24.163 "trtype": "tcp", 00:26:24.163 "traddr": "10.0.0.2", 00:26:24.163 "adrfam": "ipv4", 00:26:24.163 "trsvcid": "4420", 00:26:24.163 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:24.163 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:24.163 "hdgst": false, 00:26:24.163 "ddgst": false 00:26:24.163 }, 00:26:24.163 "method": "bdev_nvme_attach_controller" 00:26:24.163 },{ 00:26:24.163 "params": { 00:26:24.163 "name": "Nvme4", 00:26:24.163 "trtype": "tcp", 00:26:24.163 "traddr": "10.0.0.2", 00:26:24.163 "adrfam": "ipv4", 00:26:24.163 "trsvcid": "4420", 00:26:24.163 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:24.163 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:24.163 "hdgst": false, 00:26:24.163 "ddgst": false 00:26:24.164 }, 00:26:24.164 "method": "bdev_nvme_attach_controller" 00:26:24.164 },{ 00:26:24.164 "params": { 00:26:24.164 "name": "Nvme5", 00:26:24.164 "trtype": "tcp", 00:26:24.164 "traddr": "10.0.0.2", 00:26:24.164 "adrfam": "ipv4", 00:26:24.164 "trsvcid": "4420", 00:26:24.164 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:24.164 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:24.164 "hdgst": false, 00:26:24.164 "ddgst": false 00:26:24.164 }, 00:26:24.164 "method": "bdev_nvme_attach_controller" 00:26:24.164 },{ 00:26:24.164 "params": { 00:26:24.164 "name": "Nvme6", 00:26:24.164 "trtype": "tcp", 00:26:24.164 "traddr": "10.0.0.2", 00:26:24.164 "adrfam": "ipv4", 00:26:24.164 "trsvcid": "4420", 00:26:24.164 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:24.164 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:24.164 "hdgst": false, 00:26:24.164 "ddgst": false 00:26:24.164 }, 00:26:24.164 "method": "bdev_nvme_attach_controller" 00:26:24.164 },{ 00:26:24.164 "params": { 00:26:24.164 "name": "Nvme7", 00:26:24.164 "trtype": "tcp", 00:26:24.164 "traddr": "10.0.0.2", 00:26:24.164 "adrfam": "ipv4", 00:26:24.164 "trsvcid": "4420", 00:26:24.164 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:24.164 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:24.164 "hdgst": false, 00:26:24.164 "ddgst": false 00:26:24.164 }, 00:26:24.164 "method": "bdev_nvme_attach_controller" 00:26:24.164 },{ 00:26:24.164 "params": { 00:26:24.164 "name": "Nvme8", 00:26:24.164 "trtype": "tcp", 00:26:24.164 "traddr": "10.0.0.2", 00:26:24.164 "adrfam": "ipv4", 00:26:24.164 "trsvcid": "4420", 00:26:24.164 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:24.164 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:24.164 "hdgst": false, 00:26:24.164 "ddgst": false 00:26:24.164 }, 00:26:24.164 "method": "bdev_nvme_attach_controller" 00:26:24.164 },{ 00:26:24.164 "params": { 00:26:24.164 "name": "Nvme9", 00:26:24.164 "trtype": "tcp", 00:26:24.164 "traddr": "10.0.0.2", 00:26:24.164 "adrfam": "ipv4", 00:26:24.164 "trsvcid": "4420", 00:26:24.164 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:24.164 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:24.164 "hdgst": false, 00:26:24.164 "ddgst": false 00:26:24.164 }, 00:26:24.164 "method": "bdev_nvme_attach_controller" 00:26:24.164 },{ 00:26:24.164 "params": { 00:26:24.164 "name": "Nvme10", 00:26:24.164 "trtype": "tcp", 00:26:24.164 "traddr": "10.0.0.2", 00:26:24.164 "adrfam": "ipv4", 00:26:24.164 "trsvcid": "4420", 00:26:24.164 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:24.164 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:24.164 "hdgst": false, 00:26:24.164 "ddgst": false 00:26:24.164 }, 00:26:24.164 "method": "bdev_nvme_attach_controller" 00:26:24.164 }' 00:26:24.164 [2024-11-20 06:36:44.255184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.164 [2024-11-20 06:36:44.290977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.551 Running I/O for 1 seconds... 00:26:26.938 1860.00 IOPS, 116.25 MiB/s 00:26:26.938 Latency(us) 00:26:26.938 [2024-11-20T05:36:47.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.938 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme1n1 : 1.15 222.46 13.90 0.00 0.00 282303.15 16711.68 249910.61 00:26:26.938 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme2n1 : 1.14 224.46 14.03 0.00 0.00 276217.17 20971.52 267386.88 00:26:26.938 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme3n1 : 1.08 236.37 14.77 0.00 0.00 258116.05 19223.89 258648.75 00:26:26.938 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme4n1 : 1.18 271.13 16.95 0.00 0.00 222049.96 19223.89 242920.11 00:26:26.938 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme5n1 : 1.14 224.69 14.04 0.00 0.00 262554.88 17367.04 244667.73 00:26:26.938 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme6n1 : 1.18 270.05 16.88 0.00 0.00 214326.27 19660.80 227191.47 00:26:26.938 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme7n1 : 1.19 269.27 16.83 0.00 0.00 212013.31 12724.91 242920.11 00:26:26.938 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme8n1 : 1.15 223.35 13.96 0.00 0.00 249738.88 17803.95 253405.87 00:26:26.938 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme9n1 : 1.18 217.38 13.59 0.00 0.00 252823.25 26214.40 274377.39 00:26:26.938 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:26:26.938 Verification LBA range: start 0x0 length 0x400 00:26:26.938 Nvme10n1 : 1.20 266.57 16.66 0.00 0.00 202845.10 9666.56 235929.60 00:26:26.938 [2024-11-20T05:36:47.217Z] =================================================================================================================== 00:26:26.938 [2024-11-20T05:36:47.217Z] Total : 2425.73 151.61 0.00 0.00 240526.97 9666.56 274377.39 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.938 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.938 rmmod nvme_tcp 00:26:26.938 rmmod nvme_fabrics 00:26:27.199 rmmod nvme_keyring 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2901533 ']' 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2901533 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2901533 ']' 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 2901533 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2901533 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2901533' 00:26:27.200 killing process with pid 2901533 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2901533 00:26:27.200 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2901533 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.461 06:36:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.375 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.375 00:26:29.375 real 0m17.234s 00:26:29.375 user 0m36.053s 00:26:29.375 sys 0m6.913s 00:26:29.375 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:29.375 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:29.375 ************************************ 00:26:29.375 END TEST nvmf_shutdown_tc1 00:26:29.375 ************************************ 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:29.636 ************************************ 00:26:29.636 START TEST nvmf_shutdown_tc2 00:26:29.636 ************************************ 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:26:29.636 06:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:29.636 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.636 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:29.637 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:29.637 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.637 06:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:29.637 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.637 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.898 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.898 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.898 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.898 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:26:29.899 00:26:29.899 --- 10.0.0.2 ping statistics --- 00:26:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.899 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:26:29.899 06:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:26:29.899 00:26:29.899 --- 10.0.0.1 ping statistics --- 00:26:29.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.899 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:29.899 06:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2903733 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2903733 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2903733 ']' 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:29.899 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.899 [2024-11-20 06:36:50.125369] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:26:29.899 [2024-11-20 06:36:50.125434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.160 [2024-11-20 06:36:50.223927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:30.160 [2024-11-20 06:36:50.258287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.160 [2024-11-20 06:36:50.258319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.160 [2024-11-20 06:36:50.258329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.160 [2024-11-20 06:36:50.258334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.160 [2024-11-20 06:36:50.258338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
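The trace above is nvmf_tcp_init from nvmf/common.sh turning the two E810 ports into a self-contained NVMe/TCP rig: the target port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, so the traffic has to cross the physical link instead of short-circuiting through the local stack. A condensed sketch of that setup, restricted to commands that actually appear in the trace (the ipts helper at @287 is just the tagged iptables call that @790 expands to):

# Sketch of the namespace plumbing traced above; interface names, addresses,
# and the SPDK_NVMF comment tag are copied from this log.
ip netns add cvl_0_0_ns_spdk                 # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port; the comment tag lets teardown strip the rule
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                           # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two pings are the init's health check; only after both directions answer does nvmf/common.sh@450 return 0 and the test launch nvmf_tgt inside the namespace.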
00:26:30.160 [2024-11-20 06:36:50.259653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.160 [2024-11-20 06:36:50.259805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.160 [2024-11-20 06:36:50.259954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.160 [2024-11-20 06:36:50.259957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.731 [2024-11-20 06:36:50.967586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:30.731 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.731 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.731 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.992 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.992 Malloc1 00:26:30.992 [2024-11-20 06:36:51.078861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.992 Malloc2 00:26:30.992 Malloc3 00:26:30.992 Malloc4 00:26:30.992 Malloc5 00:26:30.992 Malloc6 00:26:31.253 Malloc7 00:26:31.253 Malloc8 00:26:31.253 Malloc9 00:26:31.253 Malloc10 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2904108 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2904108 /var/tmp/bdevperf.sock 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2904108 ']' 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.253 06:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:31.253 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.254 { 00:26:31.254 "params": { 00:26:31.254 "name": "Nvme$subsystem", 00:26:31.254 "trtype": "$TEST_TRANSPORT", 00:26:31.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.254 "adrfam": "ipv4", 00:26:31.254 "trsvcid": "$NVMF_PORT", 00:26:31.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.254 "hdgst": ${hdgst:-false}, 00:26:31.254 "ddgst": ${ddgst:-false} 00:26:31.254 }, 00:26:31.254 "method": "bdev_nvme_attach_controller" 00:26:31.254 } 00:26:31.254 EOF 00:26:31.254 )") 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.254 { 00:26:31.254 "params": { 00:26:31.254 "name": "Nvme$subsystem", 00:26:31.254 "trtype": "$TEST_TRANSPORT", 00:26:31.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.254 "adrfam": "ipv4", 00:26:31.254 "trsvcid": "$NVMF_PORT", 00:26:31.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.254 "hdgst": ${hdgst:-false}, 00:26:31.254 "ddgst": ${ddgst:-false} 00:26:31.254 }, 00:26:31.254 "method": "bdev_nvme_attach_controller" 00:26:31.254 } 00:26:31.254 EOF 00:26:31.254 )") 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.254 { 00:26:31.254 "params": { 00:26:31.254 
"name": "Nvme$subsystem", 00:26:31.254 "trtype": "$TEST_TRANSPORT", 00:26:31.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.254 "adrfam": "ipv4", 00:26:31.254 "trsvcid": "$NVMF_PORT", 00:26:31.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.254 "hdgst": ${hdgst:-false}, 00:26:31.254 "ddgst": ${ddgst:-false} 00:26:31.254 }, 00:26:31.254 "method": "bdev_nvme_attach_controller" 00:26:31.254 } 00:26:31.254 EOF 00:26:31.254 )") 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.254 { 00:26:31.254 "params": { 00:26:31.254 "name": "Nvme$subsystem", 00:26:31.254 "trtype": "$TEST_TRANSPORT", 00:26:31.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.254 "adrfam": "ipv4", 00:26:31.254 "trsvcid": "$NVMF_PORT", 00:26:31.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.254 "hdgst": ${hdgst:-false}, 00:26:31.254 "ddgst": ${ddgst:-false} 00:26:31.254 }, 00:26:31.254 "method": "bdev_nvme_attach_controller" 00:26:31.254 } 00:26:31.254 EOF 00:26:31.254 )") 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.254 { 00:26:31.254 "params": { 00:26:31.254 "name": "Nvme$subsystem", 00:26:31.254 "trtype": "$TEST_TRANSPORT", 00:26:31.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.254 "adrfam": "ipv4", 00:26:31.254 "trsvcid": "$NVMF_PORT", 00:26:31.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.254 "hdgst": ${hdgst:-false}, 00:26:31.254 "ddgst": ${ddgst:-false} 00:26:31.254 }, 00:26:31.254 "method": "bdev_nvme_attach_controller" 00:26:31.254 } 00:26:31.254 EOF 00:26:31.254 )") 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.254 { 00:26:31.254 "params": { 00:26:31.254 "name": "Nvme$subsystem", 00:26:31.254 "trtype": "$TEST_TRANSPORT", 00:26:31.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.254 "adrfam": "ipv4", 00:26:31.254 "trsvcid": "$NVMF_PORT", 00:26:31.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.254 "hdgst": ${hdgst:-false}, 00:26:31.254 "ddgst": ${ddgst:-false} 00:26:31.254 }, 00:26:31.254 "method": "bdev_nvme_attach_controller" 00:26:31.254 } 00:26:31.254 EOF 00:26:31.254 )") 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.254 { 00:26:31.254 "params": { 00:26:31.254 "name": "Nvme$subsystem", 00:26:31.254 "trtype": "$TEST_TRANSPORT", 00:26:31.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.254 "adrfam": "ipv4", 00:26:31.254 "trsvcid": "$NVMF_PORT", 00:26:31.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.254 "hdgst": ${hdgst:-false}, 00:26:31.254 "ddgst": ${ddgst:-false} 00:26:31.254 }, 00:26:31.254 "method": "bdev_nvme_attach_controller" 00:26:31.254 } 00:26:31.254 EOF 00:26:31.254 )") 00:26:31.254 [2024-11-20 06:36:51.525431] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:26:31.254 [2024-11-20 06:36:51.525484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904108 ] 00:26:31.254 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.516 { 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme$subsystem", 00:26:31.516 "trtype": "$TEST_TRANSPORT", 00:26:31.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "$NVMF_PORT", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.516 "hdgst": ${hdgst:-false}, 00:26:31.516 "ddgst": ${ddgst:-false} 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 } 00:26:31.516 EOF 00:26:31.516 )") 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.516 { 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme$subsystem", 00:26:31.516 "trtype": "$TEST_TRANSPORT", 00:26:31.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "$NVMF_PORT", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.516 "hdgst": ${hdgst:-false}, 00:26:31.516 "ddgst": ${ddgst:-false} 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 } 00:26:31.516 EOF 00:26:31.516 )") 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:31.516 { 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme$subsystem", 00:26:31.516 "trtype": "$TEST_TRANSPORT", 00:26:31.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.516 
"adrfam": "ipv4", 00:26:31.516 "trsvcid": "$NVMF_PORT", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.516 "hdgst": ${hdgst:-false}, 00:26:31.516 "ddgst": ${ddgst:-false} 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 } 00:26:31.516 EOF 00:26:31.516 )") 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:31.516 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme1", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme2", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme3", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme4", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme5", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme6", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme7", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 
00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme8", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme9", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 },{ 00:26:31.516 "params": { 00:26:31.516 "name": "Nvme10", 00:26:31.516 "trtype": "tcp", 00:26:31.516 "traddr": "10.0.0.2", 00:26:31.516 "adrfam": "ipv4", 00:26:31.516 "trsvcid": "4420", 00:26:31.516 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:31.516 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:31.516 "hdgst": false, 00:26:31.516 "ddgst": false 00:26:31.516 }, 00:26:31.516 "method": "bdev_nvme_attach_controller" 00:26:31.516 }' 00:26:31.516 [2024-11-20 06:36:51.613921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.516 [2024-11-20 06:36:51.650098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.428 Running I/O for 10 seconds... 
00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:33.428 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:33.429 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.689 06:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:33.689 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2904108 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2904108 ']' 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2904108 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:33.949 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2904108 00:26:34.209 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:34.209 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:34.210 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2904108' 00:26:34.210 killing process with pid 2904108 00:26:34.210 06:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2904108 00:26:34.210 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2904108
00:26:34.210 Received shutdown signal, test time was about 0.982051 seconds
00:26:34.210
00:26:34.210 Latency(us)
00:26:34.210 [2024-11-20T05:36:54.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.210 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme1n1 : 0.96 200.85 12.55 0.00 0.00 315038.44 19223.89 258648.75
00:26:34.210 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme2n1 : 0.98 261.95 16.37 0.00 0.00 236621.65 20643.84 248162.99
00:26:34.210 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme3n1 : 0.96 266.89 16.68 0.00 0.00 227155.63 13981.01 249910.61
00:26:34.210 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme4n1 : 0.97 263.92 16.50 0.00 0.00 225043.63 17476.27 232434.35
00:26:34.210 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme5n1 : 0.95 202.65 12.67 0.00 0.00 285860.69 27852.80 255153.49
00:26:34.210 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme6n1 : 0.97 263.02 16.44 0.00 0.00 216302.93 13052.59 293601.28
00:26:34.210 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme7n1 : 0.96 265.81 16.61 0.00 0.00 208786.13 14964.05 248162.99
00:26:34.210 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme8n1 : 0.98 260.91 16.31 0.00 0.00 208717.01 17148.59 251658.24
00:26:34.210 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme9n1 : 0.97 198.76 12.42 0.00 0.00 266689.14 18240.85 279620.27
00:26:34.210 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:34.210 Verification LBA range: start 0x0 length 0x400
00:26:34.210 Nvme10n1 : 0.94 203.33 12.71 0.00 0.00 253088.71 15728.64 235929.60
00:26:34.210 [2024-11-20T05:36:54.489Z] ===================================================================================================================
00:26:34.210 [2024-11-20T05:36:54.489Z] Total : 2388.10 149.26 0.00 0.00 240348.30 13052.59 293601.28
00:26:34.210 06:36:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2903733 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:35.592 06:36:55
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:35.592 rmmod nvme_tcp 00:26:35.592 rmmod nvme_fabrics 00:26:35.592 rmmod nvme_keyring 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2903733 ']' 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2903733 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2903733 ']' 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2903733 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2903733 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2903733' 00:26:35.592 killing process with pid 2903733 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2903733 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2903733 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:35.592 06:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.592 06:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:38.137 00:26:38.137 real 0m8.172s 00:26:38.137 user 0m25.217s 00:26:38.137 sys 0m1.346s 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.137 ************************************ 00:26:38.137 END TEST nvmf_shutdown_tc2 00:26:38.137 ************************************ 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:38.137 ************************************ 00:26:38.137 START TEST nvmf_shutdown_tc3 00:26:38.137 ************************************ 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:38.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.137 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:38.137 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.138 06:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:38.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:38.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.138 06:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:38.138 06:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:38.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:26:38.138 00:26:38.138 --- 10.0.0.2 ping statistics --- 00:26:38.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.138 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:26:38.138 00:26:38.138 --- 10.0.0.1 ping statistics --- 00:26:38.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.138 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2905530 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2905530 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2905530 ']' 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.138 06:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.138 06:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:38.138 [2024-11-20 06:36:58.379048] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:26:38.138 [2024-11-20 06:36:58.379118] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.399 [2024-11-20 06:36:58.477237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:38.399 [2024-11-20 06:36:58.516201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.399 [2024-11-20 06:36:58.516238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.399 [2024-11-20 06:36:58.516244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.399 [2024-11-20 06:36:58.516254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.399 [2024-11-20 06:36:58.516258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
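For reference, the nvmf_tcp_init sequence traced above reduces to the shell sketch below. This is a condensed reconstruction from this trace, not the verbatim helper: the interface names cvl_0_0/cvl_0_1 are this rig's net_devs, and the effective iptables rule additionally carries the SPDK_NVMF comment that the ipts wrapper (common.sh@790) appends.

    # Move the target-side interface into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the default port, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

All later target-side commands are wrapped in ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is why the nvmf_tgt launch above carries the repeated netns prefix.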
00:26:38.399 [2024-11-20 06:36:58.517693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.399 [2024-11-20 06:36:58.517851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:38.399 [2024-11-20 06:36:58.518004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.399 [2024-11-20 06:36:58.518006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.969 [2024-11-20 06:36:59.219548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:38.969 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.230 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.230 Malloc1 00:26:39.230 [2024-11-20 06:36:59.314092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.230 Malloc2 00:26:39.230 Malloc3 00:26:39.230 Malloc4 00:26:39.230 Malloc5 00:26:39.230 Malloc6 00:26:39.489 Malloc7 00:26:39.489 Malloc8 00:26:39.489 Malloc9 00:26:39.489 Malloc10 00:26:39.489 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.489 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:39.489 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2905752 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2905752 /var/tmp/bdevperf.sock 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2905752 ']' 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:39.490 06:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:39.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.490 { 00:26:39.490 "params": { 00:26:39.490 "name": "Nvme$subsystem", 00:26:39.490 "trtype": "$TEST_TRANSPORT", 00:26:39.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.490 "adrfam": "ipv4", 00:26:39.490 "trsvcid": "$NVMF_PORT", 00:26:39.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.490 "hdgst": ${hdgst:-false}, 00:26:39.490 "ddgst": ${ddgst:-false} 00:26:39.490 }, 00:26:39.490 "method": "bdev_nvme_attach_controller" 00:26:39.490 } 00:26:39.490 EOF 00:26:39.490 )") 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.490 { 00:26:39.490 "params": { 00:26:39.490 "name": "Nvme$subsystem", 00:26:39.490 "trtype": "$TEST_TRANSPORT", 00:26:39.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.490 "adrfam": "ipv4", 00:26:39.490 "trsvcid": "$NVMF_PORT", 00:26:39.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.490 "hdgst": ${hdgst:-false}, 00:26:39.490 "ddgst": ${ddgst:-false} 00:26:39.490 }, 00:26:39.490 "method": "bdev_nvme_attach_controller" 00:26:39.490 } 00:26:39.490 EOF 00:26:39.490 )") 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.490 { 00:26:39.490 "params": { 00:26:39.490 
"name": "Nvme$subsystem", 00:26:39.490 "trtype": "$TEST_TRANSPORT", 00:26:39.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.490 "adrfam": "ipv4", 00:26:39.490 "trsvcid": "$NVMF_PORT", 00:26:39.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.490 "hdgst": ${hdgst:-false}, 00:26:39.490 "ddgst": ${ddgst:-false} 00:26:39.490 }, 00:26:39.490 "method": "bdev_nvme_attach_controller" 00:26:39.490 } 00:26:39.490 EOF 00:26:39.490 )") 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.490 { 00:26:39.490 "params": { 00:26:39.490 "name": "Nvme$subsystem", 00:26:39.490 "trtype": "$TEST_TRANSPORT", 00:26:39.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.490 "adrfam": "ipv4", 00:26:39.490 "trsvcid": "$NVMF_PORT", 00:26:39.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.490 "hdgst": ${hdgst:-false}, 00:26:39.490 "ddgst": ${ddgst:-false} 00:26:39.490 }, 00:26:39.490 "method": "bdev_nvme_attach_controller" 00:26:39.490 } 00:26:39.490 EOF 00:26:39.490 )") 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.490 { 00:26:39.490 "params": { 00:26:39.490 "name": "Nvme$subsystem", 00:26:39.490 "trtype": "$TEST_TRANSPORT", 00:26:39.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.490 "adrfam": "ipv4", 00:26:39.490 "trsvcid": "$NVMF_PORT", 00:26:39.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.490 "hdgst": ${hdgst:-false}, 00:26:39.490 "ddgst": ${ddgst:-false} 00:26:39.490 }, 00:26:39.490 "method": "bdev_nvme_attach_controller" 00:26:39.490 } 00:26:39.490 EOF 00:26:39.490 )") 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.490 { 00:26:39.490 "params": { 00:26:39.490 "name": "Nvme$subsystem", 00:26:39.490 "trtype": "$TEST_TRANSPORT", 00:26:39.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.490 "adrfam": "ipv4", 00:26:39.490 "trsvcid": "$NVMF_PORT", 00:26:39.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.490 "hdgst": ${hdgst:-false}, 00:26:39.490 "ddgst": ${ddgst:-false} 00:26:39.490 }, 00:26:39.490 "method": "bdev_nvme_attach_controller" 00:26:39.490 } 00:26:39.490 EOF 00:26:39.490 )") 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.490 [2024-11-20 06:36:59.754747] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:26:39.490 [2024-11-20 06:36:59.754801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905752 ] 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.490 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.490 { 00:26:39.490 "params": { 00:26:39.490 "name": "Nvme$subsystem", 00:26:39.490 "trtype": "$TEST_TRANSPORT", 00:26:39.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.490 "adrfam": "ipv4", 00:26:39.490 "trsvcid": "$NVMF_PORT", 00:26:39.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.491 "hdgst": ${hdgst:-false}, 00:26:39.491 "ddgst": ${ddgst:-false} 00:26:39.491 }, 00:26:39.491 "method": "bdev_nvme_attach_controller" 00:26:39.491 } 00:26:39.491 EOF 00:26:39.491 )") 00:26:39.491 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.491 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.491 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.491 { 00:26:39.491 "params": { 00:26:39.491 "name": "Nvme$subsystem", 00:26:39.491 "trtype": "$TEST_TRANSPORT", 00:26:39.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.491 "adrfam": "ipv4", 00:26:39.491 "trsvcid": "$NVMF_PORT", 00:26:39.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.491 "hdgst": ${hdgst:-false}, 00:26:39.491 "ddgst": ${ddgst:-false} 00:26:39.491 }, 00:26:39.491 "method": "bdev_nvme_attach_controller" 00:26:39.491 } 00:26:39.491 EOF 00:26:39.491 )") 00:26:39.491 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.750 { 00:26:39.750 "params": { 00:26:39.750 "name": "Nvme$subsystem", 00:26:39.750 "trtype": "$TEST_TRANSPORT", 00:26:39.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.750 "adrfam": "ipv4", 00:26:39.750 "trsvcid": "$NVMF_PORT", 00:26:39.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.750 "hdgst": ${hdgst:-false}, 00:26:39.750 "ddgst": ${ddgst:-false} 00:26:39.750 }, 00:26:39.750 "method": "bdev_nvme_attach_controller" 00:26:39.750 } 00:26:39.750 EOF 00:26:39.750 )") 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.750 { 00:26:39.750 "params": { 00:26:39.750 "name": "Nvme$subsystem", 00:26:39.750 "trtype": "$TEST_TRANSPORT", 00:26:39.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.750 
"adrfam": "ipv4", 00:26:39.750 "trsvcid": "$NVMF_PORT", 00:26:39.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.750 "hdgst": ${hdgst:-false}, 00:26:39.750 "ddgst": ${ddgst:-false} 00:26:39.750 }, 00:26:39.750 "method": "bdev_nvme_attach_controller" 00:26:39.750 } 00:26:39.750 EOF 00:26:39.750 )") 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:39.750 06:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:39.750 "params": { 00:26:39.750 "name": "Nvme1", 00:26:39.750 "trtype": "tcp", 00:26:39.750 "traddr": "10.0.0.2", 00:26:39.750 "adrfam": "ipv4", 00:26:39.750 "trsvcid": "4420", 00:26:39.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.750 "hdgst": false, 00:26:39.750 "ddgst": false 00:26:39.750 }, 00:26:39.750 "method": "bdev_nvme_attach_controller" 00:26:39.750 },{ 00:26:39.750 "params": { 00:26:39.750 "name": "Nvme2", 00:26:39.750 "trtype": "tcp", 00:26:39.750 "traddr": "10.0.0.2", 00:26:39.750 "adrfam": "ipv4", 00:26:39.750 "trsvcid": "4420", 00:26:39.750 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:39.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:39.750 "hdgst": false, 00:26:39.750 "ddgst": false 00:26:39.750 }, 00:26:39.750 "method": "bdev_nvme_attach_controller" 00:26:39.750 },{ 00:26:39.750 "params": { 00:26:39.750 "name": "Nvme3", 00:26:39.750 "trtype": "tcp", 00:26:39.750 "traddr": "10.0.0.2", 00:26:39.750 "adrfam": "ipv4", 00:26:39.750 "trsvcid": "4420", 00:26:39.750 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:39.750 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:39.750 "hdgst": false, 00:26:39.750 "ddgst": false 00:26:39.750 }, 00:26:39.750 "method": "bdev_nvme_attach_controller" 00:26:39.750 },{ 00:26:39.750 "params": { 00:26:39.750 "name": "Nvme4", 00:26:39.751 "trtype": "tcp", 00:26:39.751 "traddr": "10.0.0.2", 00:26:39.751 "adrfam": "ipv4", 00:26:39.751 "trsvcid": "4420", 00:26:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:39.751 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:39.751 "hdgst": false, 00:26:39.751 "ddgst": false 00:26:39.751 }, 00:26:39.751 "method": "bdev_nvme_attach_controller" 00:26:39.751 },{ 00:26:39.751 "params": { 00:26:39.751 "name": "Nvme5", 00:26:39.751 "trtype": "tcp", 00:26:39.751 "traddr": "10.0.0.2", 00:26:39.751 "adrfam": "ipv4", 00:26:39.751 "trsvcid": "4420", 00:26:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:39.751 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:39.751 "hdgst": false, 00:26:39.751 "ddgst": false 00:26:39.751 }, 00:26:39.751 "method": "bdev_nvme_attach_controller" 00:26:39.751 },{ 00:26:39.751 "params": { 00:26:39.751 "name": "Nvme6", 00:26:39.751 "trtype": "tcp", 00:26:39.751 "traddr": "10.0.0.2", 00:26:39.751 "adrfam": "ipv4", 00:26:39.751 "trsvcid": "4420", 00:26:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:39.751 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:39.751 "hdgst": false, 00:26:39.751 "ddgst": false 00:26:39.751 }, 00:26:39.751 "method": "bdev_nvme_attach_controller" 00:26:39.751 },{ 00:26:39.751 "params": { 00:26:39.751 "name": "Nvme7", 00:26:39.751 "trtype": "tcp", 00:26:39.751 "traddr": "10.0.0.2", 
00:26:39.751 "adrfam": "ipv4", 00:26:39.751 "trsvcid": "4420", 00:26:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:39.751 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:39.751 "hdgst": false, 00:26:39.751 "ddgst": false 00:26:39.751 }, 00:26:39.751 "method": "bdev_nvme_attach_controller" 00:26:39.751 },{ 00:26:39.751 "params": { 00:26:39.751 "name": "Nvme8", 00:26:39.751 "trtype": "tcp", 00:26:39.751 "traddr": "10.0.0.2", 00:26:39.751 "adrfam": "ipv4", 00:26:39.751 "trsvcid": "4420", 00:26:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:39.751 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:39.751 "hdgst": false, 00:26:39.751 "ddgst": false 00:26:39.751 }, 00:26:39.751 "method": "bdev_nvme_attach_controller" 00:26:39.751 },{ 00:26:39.751 "params": { 00:26:39.751 "name": "Nvme9", 00:26:39.751 "trtype": "tcp", 00:26:39.751 "traddr": "10.0.0.2", 00:26:39.751 "adrfam": "ipv4", 00:26:39.751 "trsvcid": "4420", 00:26:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:39.751 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:39.751 "hdgst": false, 00:26:39.751 "ddgst": false 00:26:39.751 }, 00:26:39.751 "method": "bdev_nvme_attach_controller" 00:26:39.751 },{ 00:26:39.751 "params": { 00:26:39.751 "name": "Nvme10", 00:26:39.751 "trtype": "tcp", 00:26:39.751 "traddr": "10.0.0.2", 00:26:39.751 "adrfam": "ipv4", 00:26:39.751 "trsvcid": "4420", 00:26:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:39.751 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:39.751 "hdgst": false, 00:26:39.751 "ddgst": false 00:26:39.751 }, 00:26:39.751 "method": "bdev_nvme_attach_controller" 00:26:39.751 }' 00:26:39.751 [2024-11-20 06:36:59.844894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.751 [2024-11-20 06:36:59.881528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.132 Running I/O for 10 seconds... 
00:26:41.132 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:41.132 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:41.132 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:41.132 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.132 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.392 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.393 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:41.393 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:41.393 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:41.653 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2905530 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2905530 ']' 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2905530 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:41.914 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2905530 00:26:42.189 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:42.189 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:42.189 06:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2905530' 00:26:42.190 killing process with pid 2905530 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2905530 00:26:42.190 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2905530 00:26:42.190 [2024-11-20 06:37:02.199176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149f80 is same with the state(6) to be set
[the same recv-state message for tqpair=0x2149f80 repeats verbatim, timestamps 06:37:02.199250 through 06:37:02.199551; duplicates elided]
00:26:42.191 [2024-11-20 06:37:02.200480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21478a0 is same with the state(6) to be set
[the same message for tqpair=0x21478a0 repeats verbatim, timestamps 06:37:02.200495 through 06:37:02.200774; duplicates elided]
00:26:42.191 [2024-11-20 06:37:02.202032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set
[the same message for tqpair=0x2147d70 repeats verbatim from 06:37:02.202063; the capture cuts off mid-message at 06:37:02.202204]
with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202312] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.202379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147d70 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the 
state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.192 [2024-11-20 06:37:02.203515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 
06:37:02.203599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.203657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148260 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same 
with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204665] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the 
state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.193 [2024-11-20 06:37:02.204788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.204825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148c00 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 
06:37:02.205761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same 
with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.194 [2024-11-20 06:37:02.205877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.205882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.205887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.205891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.205897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.205902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.205907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.205912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21490d0 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206733] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the 
state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.206866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149a90 is same with the state(6) to be set 00:26:42.195 [2024-11-20 06:37:02.216745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
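For context on the flood above: this diagnostic fires when the target is asked to move a TCP qpair's PDU receive state to the state it is already in, and each redundant call emits one line. Below is a minimal sketch of that kind of guard, in the spirit of SPDK's nvmf_tcp_qpair_set_recv_state; the enum values and helper names here are illustrative assumptions, not the exact SPDK definitions.

#include <stdio.h>

/* Illustrative PDU receive states; the real SPDK enum differs in detail.
 * In this sketch, as in the log, the error state happens to be value 6. */
enum pdu_recv_state {
    RECV_STATE_AWAIT_PDU_READY = 0,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_AWAIT_REQ,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR, /* == 6, matching "state(6)" in the log above */
};

struct tcp_qpair {
    enum pdu_recv_state recv_state;
};

/* Guard-style setter: refuses (and logs) a transition to the current state.
 * A qpair being torn down can be asked for the same transition many times,
 * which is why one connection produces a long run of identical lines. */
static void qpair_set_recv_state(struct tcp_qpair *q, enum pdu_recv_state s)
{
    if (q->recv_state == s) {
        fprintf(stderr,
                "*ERROR*: The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)q, (int)s);
        return;
    }
    q->recv_state = s;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

    qpair_set_recv_state(&q, RECV_STATE_ERROR); /* legitimate transition */
    qpair_set_recv_state(&q, RECV_STATE_ERROR); /* redundant: logs an error */
    qpair_set_recv_state(&q, RECV_STATE_ERROR); /* and again, hence the spam */
    return 0;
}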
00:26:42.195 [2024-11-20 06:37:02.218105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.195 [2024-11-20 06:37:02.218142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.195 [2024-11-20 06:37:02.218154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.195 [2024-11-20 06:37:02.218170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.195 [2024-11-20 06:37:02.218179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.195 [2024-11-20 06:37:02.218187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.195 [2024-11-20 06:37:02.218195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.195 [2024-11-20 06:37:02.218203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.195 [2024-11-20 06:37:02.218211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11df610 is same with the state(6) to be set 00:26:42.195
[the same four-command ASYNC EVENT REQUEST / ABORTED - SQ DELETION block, each followed by the same recv-state *ERROR* line from nvme_tcp.c:326, repeats for tqpair=0x1721a20, 0x171f450, 0x1735c90, 0x12c7850, 0x12c5fc0, 0x16e8ba0, 0x16f2e00, 0x12c5790, and 0x12c7cb0, timestamps 06:37:02.218243 through 06:37:02.219026]
cdw10:00000000 cdw11:00000000 00:26:42.196 [2024-11-20 06:37:02.218900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.196 [2024-11-20 06:37:02.218909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.196 [2024-11-20 06:37:02.218916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.196 [2024-11-20 06:37:02.218924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.196 [2024-11-20 06:37:02.218932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.196 [2024-11-20 06:37:02.218939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c5790 is same with the state(6) to be set 00:26:42.196 [2024-11-20 06:37:02.218962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.196 [2024-11-20 06:37:02.218971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.196 [2024-11-20 06:37:02.218980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.196 [2024-11-20 06:37:02.218988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.196 [2024-11-20 06:37:02.218996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.196 [2024-11-20 06:37:02.219003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.196 [2024-11-20 06:37:02.219012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.197 [2024-11-20 06:37:02.219019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7cb0 is same with the state(6) to be set 00:26:42.197 [2024-11-20 06:37:02.219772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.219989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.219999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.197 [2024-11-20 06:37:02.220405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.197 [2024-11-20 06:37:02.220416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.220917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.220942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:42.198 [2024-11-20 06:37:02.221113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.198 [2024-11-20 06:37:02.221294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.198 [2024-11-20 06:37:02.221304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.199 [2024-11-20 06:37:02.221827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.199 [2024-11-20 06:37:02.221834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.221988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.221998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.222005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c9270 is same with the state(6) to be set 00:26:42.200 [2024-11-20 06:37:02.229857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.200 [2024-11-20 06:37:02.229923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.200 [2024-11-20 06:37:02.229932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[~120 repeated nvme_qpair.c NOTICE entries elided (console time 00:26:42.200-00:26:42.202, log time 06:37:02.229942-06:37:02.231012): 60 command/completion pairs covering WRITE sqid:1 cid:62-63 (lba:32512-32640) and READ sqid:1 cid:0-57 (lba:24576-31872, step 128), each 243:nvme_io_qpair_print_command print followed by 474:spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"]
00:26:42.202 [2024-11-20 06:37:02.231230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11df610 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1721a20 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171f450 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1735c90 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7850 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5fc0 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e8ba0 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2e00 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5790 (9): Bad file descriptor
00:26:42.202 [2024-11-20 06:37:02.231377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7cb0 (9): Bad file descriptor
[~128 repeated nvme_qpair.c NOTICE entries elided (console time 00:26:42.202-00:26:42.203, log time 06:37:02.231463-06:37:02.232593): 64 command/completion pairs covering WRITE sqid:1 cid:44-63 (lba:30208-32640, step 128) and READ sqid:1 cid:0-43 (lba:24576-30080, step 128), each 243:nvme_io_qpair_print_command print followed by 474:spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"]
00:26:42.203 [2024-11-20 06:37:02.236505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:42.203 [2024-11-20 06:37:02.238014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:42.203 [2024-11-20 06:37:02.238042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:42.204 [2024-11-20 06:37:02.238455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.204 [2024-11-20 06:37:02.238497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c7850 with addr=10.0.0.2, port=4420
00:26:42.204 [2024-11-20 06:37:02.238512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7850 is same with the state(6) to be set
00:26:42.204 [2024-11-20 06:37:02.239211] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:42.204 [2024-11-20 06:37:02.239261] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:42.204 [2024-11-20 06:37:02.239560] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:42.204 [2024-11-20 06:37:02.239600] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:42.204 [2024-11-20 06:37:02.239616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:42.204 [2024-11-20 06:37:02.239940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.204 [2024-11-20 06:37:02.239958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2e00 with addr=10.0.0.2, port=4420
00:26:42.204 [2024-11-20 06:37:02.239966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2e00 is same with the state(6) to be set
00:26:42.204 [2024-11-20 06:37:02.240201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.204 [2024-11-20 06:37:02.240225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c7cb0 with addr=10.0.0.2, port=4420
00:26:42.204 [2024-11-20 06:37:02.240233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7cb0 is same with the state(6) to be set
00:26:42.204 [2024-11-20 06:37:02.240246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7850 (9): Bad file descriptor
00:26:42.204 [2024-11-20 06:37:02.240567] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:42.204 [2024-11-20 06:37:02.240608] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:42.204 [2024-11-20 06:37:02.240881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.204 [2024-11-20 06:37:02.240896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1735c90 with addr=10.0.0.2, port=4420
00:26:42.204 [2024-11-20 06:37:02.240903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735c90 is same with the state(6) to be set
00:26:42.204 [2024-11-20 06:37:02.240913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2e00 (9): Bad file descriptor
00:26:42.204 [2024-11-20 06:37:02.240923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7cb0 (9): Bad file descriptor
00:26:42.204 [2024-11-20 06:37:02.240932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:42.204 [2024-11-20 06:37:02.240940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:42.204 [2024-11-20 06:37:02.240949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:42.204 [2024-11-20 06:37:02.240959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:42.204 [2024-11-20 06:37:02.241046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1735c90 (9): Bad file descriptor
00:26:42.204 [2024-11-20 06:37:02.241059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:42.204 [2024-11-20 06:37:02.241067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:42.204 [2024-11-20 06:37:02.241074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:42.204 [2024-11-20 06:37:02.241081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:42.204 [2024-11-20 06:37:02.241094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:42.204 [2024-11-20 06:37:02.241101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:42.204 [2024-11-20 06:37:02.241108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:42.204 [2024-11-20 06:37:02.241115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:42.204 [2024-11-20 06:37:02.241155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:26:42.204 [2024-11-20 06:37:02.241169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:26:42.204 [2024-11-20 06:37:02.241176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:42.204 [2024-11-20 06:37:02.241183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
[~120 repeated nvme_qpair.c NOTICE entries elided (console time 00:26:42.204-00:26:42.206, log time 06:37:02.241344-06:37:02.242442): 60 command/completion pairs covering READ sqid:1 cid:5-49 (lba:25216-30848, step 128), WRITE sqid:1 cid:0-3 (lba:32768-33152, step 128), and READ sqid:1 cid:50-60 (lba:30976-32256, step 128), each 243:nvme_io_qpair_print_command print followed by 474:spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"; output truncated mid-entry below]
00:26:42.206 [2024-11-20 06:37:02.242452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.242460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.242470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.242477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.242487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.242495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.242505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.242513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.242522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cc950 is same with the state(6) to be set 00:26:42.206 [2024-11-20 06:37:02.243815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243931] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.243989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.243999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.206 [2024-11-20 06:37:02.244377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.206 [2024-11-20 06:37:02.244385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:42.207 [2024-11-20 06:37:02.244666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 
06:37:02.244847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.244981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.244990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7cd0 is same with the state(6) to be set 00:26:42.207 [2024-11-20 06:37:02.246261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.246274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.246287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.246297] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.246308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.246318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.246330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.246340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.246351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.246360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.207 [2024-11-20 06:37:02.246369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.207 [2024-11-20 06:37:02.246380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.208 [2024-11-20 06:37:02.246986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.208 [2024-11-20 06:37:02.246996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.247425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.247433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ca810 is same with the state(6) to be set 00:26:42.209 [2024-11-20 06:37:02.248713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248850] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.209 [2024-11-20 06:37:02.248984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.209 [2024-11-20 06:37:02.248994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:42.210 [2024-11-20 06:37:02.249581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.210 [2024-11-20 06:37:02.249700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.210 [2024-11-20 06:37:02.249708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 
06:37:02.249761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.249869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.249877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cbdb0 is same with the state(6) to be set 00:26:42.211 [2024-11-20 06:37:02.251145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.211 [2024-11-20 06:37:02.251670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.211 [2024-11-20 06:37:02.251679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.251988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.251997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.252302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.252311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ce8a0 is same with the state(6) to be set 00:26:42.212 [2024-11-20 06:37:02.253590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.253607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.253620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.253629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.253641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.253650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.253662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.212 [2024-11-20 06:37:02.253673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.212 [2024-11-20 06:37:02.253684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253763] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.253987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.253996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.213 [2024-11-20 06:37:02.254299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.213 [2024-11-20 06:37:02.254307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.214 [2024-11-20 06:37:02.254475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.214 [2024-11-20 06:37:02.254483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:42.214 [2024-11-20 06:37:02.254493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.214 [2024-11-20 06:37:02.254755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:42.214 [2024-11-20 06:37:02.254764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15080c0 is same with the state(6) to be set
00:26:42.214 [2024-11-20 06:37:02.256282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:42.214 [2024-11-20 06:37:02.256309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:42.214 [2024-11-20 06:37:02.256321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:42.214 [2024-11-20 06:37:02.256331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:42.214 [2024-11-20 06:37:02.256417] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:26:42.214 [2024-11-20 06:37:02.256433] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
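A note on the status pair printed above: "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, which the driver spells out as ABORTED - SQ DELETION. These READs were still queued on submission queue 1 when that queue was torn down by the shutdown under test, so the driver completes them locally with this status rather than a media or transport error; the same pair reappears later in the tc4 run as "Write completed with error (sct=0, sc=8)".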
00:26:42.214 [2024-11-20 06:37:02.256506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:42.214 task offset: 24576 on job bdev=Nvme2n1 fails
00:26:42.214
00:26:42.214 Latency(us)
00:26:42.214 [2024-11-20T05:37:02.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.214 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme1n1 ended in about 0.96 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme1n1 : 0.96 199.45 12.47 66.48 0.00 238027.52 17367.04 232434.35
00:26:42.214 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme2n1 ended in about 0.96 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme2n1 : 0.96 200.27 12.52 66.76 0.00 232299.95 18022.40 246415.36
00:26:42.214 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme3n1 ended in about 0.97 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme3n1 : 0.97 203.38 12.71 66.07 0.00 225608.94 16384.00 242920.11
00:26:42.214 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme4n1 ended in about 0.97 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme4n1 : 0.97 197.72 12.36 65.91 0.00 225888.64 14854.83 241172.48
00:26:42.214 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme5n1 ended in about 0.96 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme5n1 : 0.96 200.00 12.50 66.67 0.00 218398.51 16711.68 270882.13
00:26:42.214 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme6n1 ended in about 0.97 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme6n1 : 0.97 131.48 8.22 65.74 0.00 289470.01 16711.68 260396.37
00:26:42.214 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme7n1 ended in about 0.98 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme7n1 : 0.98 200.83 12.55 65.58 0.00 209663.53 9175.04 253405.87
00:26:42.214 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme8n1 ended in about 0.96 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.214 Nvme8n1 : 0.96 199.73 12.48 66.58 0.00 204521.60 17476.27 242920.11
00:26:42.214 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.214 Job: Nvme9n1 ended in about 0.98 seconds with error
00:26:42.214 Verification LBA range: start 0x0 length 0x400
00:26:42.215 Nvme9n1 : 0.98 130.83 8.18 65.41 0.00 272311.75 18131.63 249910.61
00:26:42.215 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:42.215 Job: Nvme10n1 ended in about 0.98 seconds with error
00:26:42.215 Verification LBA range: start 0x0 length 0x400
00:26:42.215 Nvme10n1 : 0.98 130.50 8.16 65.25 0.00 266914.70 19660.80 269134.51
00:26:42.215 [2024-11-20T05:37:02.494Z] ===================================================================================================================
00:26:42.215 [2024-11-20T05:37:02.494Z] Total : 1794.21 112.14 660.45 0.00 235172.50 9175.04 270882.13
00:26:42.215 [2024-11-20 06:37:02.280276] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:42.215 [2024-11-20 06:37:02.280308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:42.215 [2024-11-20 06:37:02.280691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.280709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c5790 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.280719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c5790 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.281033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.281044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c5fc0 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.281051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c5fc0 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.281330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.281342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e8ba0 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.281349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e8ba0 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.281689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.281700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11df610 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.281707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11df610 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.283310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:42.215 [2024-11-20 06:37:02.283325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:42.215 [2024-11-20 06:37:02.283336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:42.215 [2024-11-20 06:37:02.283345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:42.215 [2024-11-20 06:37:02.283709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.283723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1721a20 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.283730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1721a20 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.283945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.283956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171f450 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.283964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171f450 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.283980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5790 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.283992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5fc0 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.284002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e8ba0 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.284011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11df610 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.284043] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:26:42.215 [2024-11-20 06:37:02.284055] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:26:42.215 [2024-11-20 06:37:02.284066] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:26:42.215 [2024-11-20 06:37:02.284078] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:26:42.215 [2024-11-20 06:37:02.284703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.284720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c7850 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.284728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7850 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.285073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.285084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c7cb0 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.285091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7cb0 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.285404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.285416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2e00 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.285423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2e00 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.285723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.215 [2024-11-20 06:37:02.285734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1735c90 with addr=10.0.0.2, port=4420
00:26:42.215 [2024-11-20 06:37:02.285742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735c90 is same with the state(6) to be set
00:26:42.215 [2024-11-20 06:37:02.285753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1721a20 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.285763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171f450 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.285773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:42.215 [2024-11-20 06:37:02.285780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:42.215 [2024-11-20 06:37:02.285789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:42.215 [2024-11-20 06:37:02.285799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:26:42.215 [2024-11-20 06:37:02.285808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:42.215 [2024-11-20 06:37:02.285815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:42.215 [2024-11-20 06:37:02.285826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:42.215 [2024-11-20 06:37:02.285833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:42.215 [2024-11-20 06:37:02.285841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:26:42.215 [2024-11-20 06:37:02.285848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:26:42.215 [2024-11-20 06:37:02.285855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:42.215 [2024-11-20 06:37:02.285861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:26:42.215 [2024-11-20 06:37:02.285869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:26:42.215 [2024-11-20 06:37:02.285876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:26:42.215 [2024-11-20 06:37:02.285883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:42.215 [2024-11-20 06:37:02.285890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:26:42.215 [2024-11-20 06:37:02.285967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7850 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.285979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c7cb0 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.285989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2e00 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.285999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1735c90 (9): Bad file descriptor
00:26:42.215 [2024-11-20 06:37:02.286008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:26:42.215 [2024-11-20 06:37:02.286015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:26:42.215 [2024-11-20 06:37:02.286022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:42.215 [2024-11-20 06:37:02.286029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:26:42.215 [2024-11-20 06:37:02.286037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:42.215 [2024-11-20 06:37:02.286044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:42.215 [2024-11-20 06:37:02.286051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:42.215 [2024-11-20 06:37:02.286059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:42.215 [2024-11-20 06:37:02.286086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:42.215 [2024-11-20 06:37:02.286094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:42.215 [2024-11-20 06:37:02.286101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:42.216 [2024-11-20 06:37:02.286108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:42.216 [2024-11-20 06:37:02.286116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:42.216 [2024-11-20 06:37:02.286123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:42.216 [2024-11-20 06:37:02.286133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:42.216 [2024-11-20 06:37:02.286140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:42.216 [2024-11-20 06:37:02.286147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:42.216 [2024-11-20 06:37:02.286154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:42.216 [2024-11-20 06:37:02.286166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:42.216 [2024-11-20 06:37:02.286173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:42.216 [2024-11-20 06:37:02.286180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:26:42.216 [2024-11-20 06:37:02.286187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:26:42.216 [2024-11-20 06:37:02.286194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:42.216 [2024-11-20 06:37:02.286201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
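Two sanity checks on the failure output above. The Latency table is internally consistent with the 65536-byte IO size shown in each job header: MiB/s = IOPS x 65536 / 2^20 = IOPS / 16 (Nvme1n1: 199.45 / 16 ≈ 12.47), and the Total row is the column-wise sum of the ten jobs (199.45 + 200.27 + ... + 130.50 ≈ 1794.21 IOPS, 112.14 MiB/s). Second, "connect() failed, errno = 111" decodes as ECONNREFUSED on Linux: the host-side reconnect attempts to 10.0.0.2:4420 are refused because the listener belongs to the very target this test case is deliberately shutting down.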
00:26:42.478 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2905752
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2905752
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2905752
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:43.421 rmmod nvme_tcp
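For readers following the NOT wait 2905752 trace above: the harness expects the bdevperf process to have died (the target was shut down underneath it), so it wraps wait in NOT, which succeeds only when the wrapped command fails. A condensed sketch of the logic visible in the trace (autotest_common.sh lines 650-677), not the verbatim helper, which also validates its argument through valid_exec_arg:

NOT() {
    local es=0
    "$@" || es=$?             # run the wrapped command; here `wait 2905752` returns 255
    (( es > 128 )) && es=127  # collapse signal-style exit statuses (255 -> 127)
    case "$es" in
        0) ;;                 # wrapped command succeeded: es stays 0
        *) es=1 ;;            # any failure normalizes to 1 (127 -> 1)
    esac
    (( !es == 0 ))            # invert: exit 0 (success) only if the command failed
}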
rmmod nvme_fabrics
00:26:43.421 rmmod nvme_keyring
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2905530 ']'
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2905530
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2905530 ']'
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2905530
00:26:43.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2905530) - No such process
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2905530 is not found'
00:26:43.421 Process with pid 2905530 is not found
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:43.421 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:45.970
00:26:45.970 real 0m7.671s
00:26:45.970 user 0m18.516s
00:26:45.970 sys 0m1.286s
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:45.970 ************************************
00:26:45.970 END TEST nvmf_shutdown_tc3
00:26:45.970 ************************************
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:45.970 ************************************
00:26:45.970 START TEST nvmf_shutdown_tc4
00:26:45.970 ************************************
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:26:45.970 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:26:45.970 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:26:45.970 Found net devices under 0000:4b:00.0: cvl_0_0
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:45.970 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:26:45.971 Found net devices under 0000:4b:00.1: cvl_0_1
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:45.971 06:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:45.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:45.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms
00:26:45.971
00:26:45.971 --- 10.0.0.2 ping statistics ---
00:26:45.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:45.971 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:45.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:45.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms
00:26:45.971
00:26:45.971 --- 10.0.0.1 ping statistics ---
00:26:45.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:45.971 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2907099
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2907099
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2907099 ']'
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:45.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
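The nvmf_tcp_init sequence traced above splits the two e810 ports between a target-side network namespace and the root (initiator) namespace, then verifies reachability both ways, so host and target traffic really cross the wire. Collected from the trace into a standalone sketch (interface names cvl_0_0/cvl_0_1 come from the PCI scan above; run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator check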
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:45.971 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:45.971 [2024-11-20 06:37:06.137550] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:26:45.971 [2024-11-20 06:37:06.137612] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:45.971 [2024-11-20 06:37:06.234724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:46.232 [2024-11-20 06:37:06.269014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:46.232 [2024-11-20 06:37:06.269045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:46.232 [2024-11-20 06:37:06.269051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:46.232 [2024-11-20 06:37:06.269056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:46.232 [2024-11-20 06:37:06.269060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:46.232 [2024-11-20 06:37:06.270286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:46.232 [2024-11-20 06:37:06.270441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:46.232 [2024-11-20 06:37:06.270587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:46.232 [2024-11-20 06:37:06.270587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:46.804 [2024-11-20 06:37:06.990399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:46.804 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.804 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:46.804 Malloc1
00:26:47.064 [2024-11-20 06:37:07.098915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:47.064 Malloc2
00:26:47.064 Malloc3
00:26:47.064 Malloc4
00:26:47.064 Malloc5
00:26:47.064 Malloc6
00:26:47.064 Malloc7
00:26:47.324 Malloc8
00:26:47.324 Malloc9
00:26:47.324 Malloc10
00:26:47.324 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:47.324 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:26:47.324 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:47.324 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:47.324 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2907483
00:26:47.324 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:26:47.324 06:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:26:47.324 [2024-11-20 06:37:07.578605] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2907099
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2907099 ']'
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2907099
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2907099
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2907099'
00:26:52.703 killing process with pid 2907099
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2907099
00:26:52.703 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2907099
00:26:52.703 [2024-11-20 06:37:12.572062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72380 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72380 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72380 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72380 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72d20 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72d20 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72d20 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72d20 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72d20 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72d20 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72d20 is same with the state(6) to be set
00:26:52.703 [2024-11-20 06:37:12.572954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd71eb0 is same with the state(6) to be set
00:26:52.703 Write completed with error (sct=0, sc=8)
00:26:52.703 Write completed with error (sct=0, sc=8)
00:26:52.703 starting I/O failed: -6
00:26:52.703 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 starting I/O failed: -6
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 starting I/O failed: -6
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 starting I/O failed: -6
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 starting I/O failed: -6
00:26:52.704 Write completed with error (sct=0, sc=8)
00:26:52.704 starting I/O failed: -6
00:26:52.704 Write completed
with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 [2024-11-20 06:37:12.574083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.704 starting I/O failed: -6 00:26:52.704 starting I/O failed: -6 00:26:52.704 starting I/O failed: -6 00:26:52.704 starting I/O failed: -6 00:26:52.704 starting I/O failed: -6 00:26:52.704 starting I/O failed: -6 00:26:52.704 starting I/O failed: -6 00:26:52.704 NVMe io qpair process completion error 00:26:52.704 [2024-11-20 06:37:12.575647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736e0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.575665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736e0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.575675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736e0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.575680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736e0 is same with the state(6) 
to be set 00:26:52.704 [2024-11-20 06:37:12.575685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736e0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.575690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736e0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.575694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736e0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.575974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73bd0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.575997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73bd0 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.576294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73210 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.576314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73210 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.576320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73210 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.576325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73210 is same with the state(6) to be set 00:26:52.704 [2024-11-20 06:37:12.576330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73210 is same with the state(6) to be set 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed 
with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 starting I/O failed: -6 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.704 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 
starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write 
completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 [2024-11-20 06:37:12.579461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.705 NVMe io qpair process completion error 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 [2024-11-20 06:37:12.580585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting 
I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.705 starting I/O failed: -6 00:26:52.705 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 [2024-11-20 06:37:12.581408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with 
error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 [2024-11-20 06:37:12.582322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O 
failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O 
failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.706 starting I/O failed: -6 00:26:52.706 [2024-11-20 06:37:12.583794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.706 NVMe io qpair process completion error 00:26:52.706 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O 
failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 [2024-11-20 06:37:12.584871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.707 starting I/O failed: -6 00:26:52.707 starting I/O failed: -6 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error 
(sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 [2024-11-20 06:37:12.585880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 
Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 [2024-11-20 06:37:12.586825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.707 Write completed with error (sct=0, sc=8) 00:26:52.707 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write 
completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 [2024-11-20 06:37:12.589055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.708 NVMe io qpair process completion error 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write 
completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 [2024-11-20 06:37:12.590203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 Write completed with error (sct=0, sc=8) 00:26:52.708 starting I/O failed: -6 00:26:52.709 Write completed with error (sct=0, sc=8) 00:26:52.709 starting I/O failed: -6 00:26:52.709 Write completed with error (sct=0, sc=8) 00:26:52.709 Write 
completed with error (sct=0, sc=8)
00:26:52.709 starting I/O failed: -6
00:26:52.709 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed]
00:26:52.709 [2024-11-20 06:37:12.591015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.709 [repeated write-failure entries condensed]
00:26:52.709 [2024-11-20 06:37:12.591978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:52.709 [repeated write-failure entries condensed]
00:26:52.710 [2024-11-20 06:37:12.593730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:52.710 NVMe io qpair process completion error
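The wall of repeated entries above has a simple structure: every write that was in flight when the target connection dropped completes with NVMe status sct=0, sc=8, and every attempt to start a replacement write fails with -6. With sct=0 (generic command status), sc=0x8 is "Command Aborted due to SQ Deletion" in the NVMe spec, i.e. the status expected for I/O caught in a qpair teardown. A minimal sketch of how such a completion would be classified in an SPDK completion callback, using the public spdk_nvme_* API (the callback name is hypothetical):

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical I/O completion callback: prints failed completions the way
 * the log above does, using the status code type (sct) and status code (sc)
 * carried in the completion entry. */
static void
write_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0, sc=8 here is SPDK_NVME_SCT_GENERIC /
		 * SPDK_NVME_SC_ABORTED_SQ_DELETION: the write was aborted
		 * because its submission queue was deleted mid-teardown. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}
```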
00:26:52.710 Write completed with error (sct=0, sc=8)
00:26:52.710 starting I/O failed: -6
00:26:52.710 [repeated write-failure entries condensed]
00:26:52.710 [2024-11-20 06:37:12.594870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:52.710 [repeated write-failure entries condensed]
00:26:52.710 [2024-11-20 06:37:12.595694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.710 [repeated write-failure entries condensed]
00:26:52.711 [2024-11-20 06:37:12.596641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:52.711 [repeated write-failure entries condensed]
00:26:52.711 [2024-11-20 06:37:12.598320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:52.711 NVMe io qpair process completion error
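The bracketed nvme_qpair.c entries mark where SPDK's completion path gives up on a qpair: spdk_nvme_qpair_process_completions() returns a negative errno once the TCP connection behind the qpair is gone (-6 is -ENXIO, "No such device or address") and logs the "CQ transport error" message. A minimal polling sketch under that assumption; everything around the API call is illustrative:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative poll step: reap completions on one I/O qpair and treat a
 * negative return as a dead transport, which is what produces the
 * "CQ transport error -6 (No such device or address)" entries above. */
static int
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means no limit: drain whatever is ready. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* e.g. -ENXIO (-6): the connection backing the qpair is gone. */
		fprintf(stderr, "qpair poll failed: %d\n", (int)rc);
		return (int)rc;
	}
	return 0; /* rc >= 0 is the number of completions processed. */
}
```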
00:26:52.711 Write completed with error (sct=0, sc=8)
00:26:52.711 starting I/O failed: -6
00:26:52.711 [repeated write-failure entries condensed]
00:26:52.711 [2024-11-20 06:37:12.599475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:52.712 [repeated write-failure entries condensed]
00:26:52.712 [2024-11-20 06:37:12.600442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.712 [repeated write-failure entries condensed]
00:26:52.712 [2024-11-20 06:37:12.601382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:52.712 [repeated write-failure entries condensed]
00:26:52.713 [2024-11-20 06:37:12.604142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:52.713 NVMe io qpair process completion error
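"starting I/O failed: -6" is the submission side of the same failure: on a disconnected qpair, the submit call itself returns -ENXIO instead of queueing the request. A sketch assuming spdk_nvme_ns_cmd_write() as the submit path; the ns, qpair, buffer, and LBA arguments are placeholders set up elsewhere:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Placeholder completion callback; status handling is shown in the
 * earlier sketch. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	(void)cpl;
}

/* Illustrative submission: on a dead qpair spdk_nvme_ns_cmd_write() fails
 * immediately, matching the "starting I/O failed: -6" entries above. */
static void
start_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	    void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					write_done, NULL, 0);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc); /* -6 == -ENXIO */
	}
}
```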
00:26:52.713 Write completed with error (sct=0, sc=8)
00:26:52.713 starting I/O failed: -6
00:26:52.713 [repeated write-failure entries condensed]
00:26:52.713 [2024-11-20 06:37:12.605297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:52.713 [repeated write-failure entries condensed]
00:26:52.713 [2024-11-20 06:37:12.606229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.714 [repeated write-failure entries condensed]
00:26:52.714 [2024-11-20 06:37:12.607134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:52.714 [repeated write-failure entries condensed]
00:26:52.714 [2024-11-20 06:37:12.608978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:52.714 NVMe io qpair process completion error
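Once every qpair on a controller has failed, the log prints "NVMe io qpair process completion error" and moves on to the next subsystem. Whether and how the test reconnects is not visible in this excerpt; purely as an illustration, one recovery step available in the public API is a full controller reset:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical recovery step, not taken from the test: after all I/O qpairs
 * on a controller report transport errors, attempt a controller reset.
 * Reconnect/retry policy is left to the application. */
static void
try_recover(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_reset(ctrlr);
	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
	}
}
```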
00:26:52.714 Write completed with error (sct=0, sc=8)
00:26:52.715 starting I/O failed: -6
00:26:52.715 [repeated write-failure entries condensed]
00:26:52.715 [2024-11-20 06:37:12.610341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:52.715 [repeated write-failure entries condensed]
00:26:52.715 [2024-11-20 06:37:12.611168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:52.715 [repeated write-failure entries condensed]
00:26:52.715 [2024-11-20 06:37:12.612088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:52.716 [repeated write-failure entries condensed]
00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 [2024-11-20 06:37:12.614533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.716 NVMe io qpair process completion error 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 [2024-11-20 06:37:12.615765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.716 Write completed with error (sct=0, sc=8) 
00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 [2024-11-20 06:37:12.616603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.716 starting I/O failed: -6 00:26:52.716 starting I/O failed: -6 00:26:52.716 starting I/O failed: -6 00:26:52.716 starting I/O failed: -6 00:26:52.716 starting I/O failed: -6 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 
00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed: -6 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 [2024-11-20 06:37:12.617958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 
00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 
00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 starting I/O failed: -6 00:26:52.717 [2024-11-20 06:37:12.619549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.717 NVMe io qpair process completion error 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write 
completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.717 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 00:26:52.718 Write completed with error (sct=0, sc=8) 
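Reading the completion records: sct=0 is the NVMe generic command status type and, taking the printed values as decimal, sc=8 in that type is "Command Aborted due to SQ Deletion", which fits a shutdown test tearing down qpairs while writes are in flight; the -6 is errno ENXIO, matching the "No such device or address" text from the transport. When triaging a saved copy of such a log, a short pipeline can collapse the repetition into per-status counts. This is a reading aid under those assumptions, not part of the test suite, and "build.log" is a hypothetical capture file:

    # Summarize repeated NVMe completion errors by (sct, sc) pair
    # (assumes GNU grep/sort/uniq on the PATH).
    grep -o 'error (sct=[0-9]*, sc=[0-9]*)' build.log | sort | uniq -c | sort -rn
    # The distinct transport-level events are worth keeping verbatim:
    grep -n 'CQ transport error' build.log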
00:26:52.718 Write completed with error (sct=0, sc=8) [repeated]
00:26:52.718 Initializing NVMe Controllers
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:26:52.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:26:52.718 [each attach is followed by the same pair of lines: "Controller IO queue size 128, less than required." / "Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver."]
00:26:52.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnodeN) NSID 1 with lcore 0 [one line each for cnode5, 8, 1, 3, 7, 4, 9, 2, 6, 10]
00:26:52.718 Initialization complete. Launching workers.
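The "IO queue size 128, less than required" warning means the benchmark asked for a deeper queue than the 128 entries each controller negotiated, so the surplus requests sit queued inside the NVMe driver rather than on the wire. A rerun that avoids the warning would cap the queue depth at 128 or below. The flag spellings below are the commonly documented spdk_nvme_perf options (verify against ./build/bin/spdk_nvme_perf --help) and the numbers are illustrative, not taken from this run:

    # Illustrative perf invocation keeping queue depth within the
    # controller's 128-entry IO queues (4 KiB sequential writes, 10 s):
    ./build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -q 128 -o 4096 -w write -t 10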
00:26:52.718 ========================================================
00:26:52.718                                                                                    Latency(us)
00:26:52.718 Device Information                                                      :     IOPS   MiB/s   Average       min        max
00:26:52.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1909.31   82.04  67059.01    714.17  128002.51
00:26:52.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1908.47   82.00  67127.23    692.07  149582.39
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1896.96   81.51  67355.94    456.87  150982.86
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1920.19   82.51  66356.86    768.22  118657.68
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1878.33   80.71  67566.41    736.07  118339.72
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1826.42   78.48  69507.34    691.94  119719.24
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1901.36   81.70  66797.39    671.51  120018.79
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1889.01   81.17  67256.39    824.52  121627.66
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1886.08   81.04  67385.30    706.42  121081.17
00:26:52.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1901.15   81.69  66888.63    731.57  126135.79
00:26:52.719 ========================================================
00:26:52.719 Total                                                                   : 18917.27  812.85  67319.98    456.87  150982.86
00:26:52.719 ========================================================
00:26:52.719
00:26:52.719 [2024-11-20 06:37:12.625587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8bc0 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9740 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5da720 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8560 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9410 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8890 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9a70 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5daae0 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8ef0 is same with the state(6) to be set
00:26:52.719 [2024-11-20 06:37:12.625864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5da900 is same with the state(6) to be set
00:26:52.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:52.719 06:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
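The Total row is an aggregate of the rows above it: the IOPS and MiB/s columns sum (1909.31 + 1908.47 + ... + 1901.15 ≈ 18917.27), while min and max are the global extremes, both contributed by cnode1 here. A throwaway check against a saved copy of the table; the awk script is illustrative and perf_table.txt is a hypothetical file holding the rows above:

    # Sum IOPS/MiB/s and take global min/max over the per-subsystem rows;
    # the last five fields of each row are IOPS, MiB/s, Average, min, max.
    awk '/NSID 1 from core/ {
        iops += $(NF-4); mibs += $(NF-3)
        if (min == "" || $(NF-1) + 0 < min + 0) min = $(NF-1)
        if ($NF + 0 > max + 0) max = $NF
    } END { printf "IOPS=%.2f MiB/s=%.2f min=%.2f max=%.2f\n", iops, mibs, min, max }' perf_table.txt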
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2907483
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2907483
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2907483
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:53.663 rmmod nvme_tcp
00:26:53.663 rmmod nvme_fabrics
00:26:53.663 rmmod nvme_keyring
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
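The NOT wait 2907483 step above is SPDK's negative assertion from test/common/autotest_common.sh: the trace shows wait on the already-killed perf process setting es=1, the (( es > 128 )) signal check failing, and (( !es == 0 )) turning the failure into a pass. A minimal sketch of that pattern; the real helper also validates its argument via valid_exec_arg, which is omitted here:

    # NOT succeeds exactly when the wrapped command fails "normally".
    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return "$es"   # signal-style exits still propagate
        ((es != 0))                    # plain failure becomes success
    }
    # Usage mirroring the trace: NOT wait 2907483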
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2907099 ']'
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2907099
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2907099 ']'
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2907099
00:26:53.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2907099) - No such process
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2907099 is not found'
00:26:53.663 Process with pid 2907099 is not found
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:53.663 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:56.213 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:56.213
00:26:56.213 real 0m10.267s
00:26:56.213 user 0m27.849s
00:26:56.213 sys 0m4.098s
00:26:56.213 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:56.214 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:56.214 ************************************
00:26:56.214 END TEST nvmf_shutdown_tc4
00:26:56.214 ************************************
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:26:56.214
00:26:56.214 real 0m43.930s
00:26:56.214 user 1m47.916s
00:26:56.214 sys 0m13.985s
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
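The killprocess trace above degrades gracefully when the nvmf target has already exited: kill -0 delivers no signal and only probes whether the PID still exists, which is why the shell's "(2907099) - No such process" error is followed by a plain informational echo instead of a test failure. A stripped-down sketch of that liveness pattern; the real helper in autotest_common.sh does more, such as escalating signals and waiting for exit:

    # Probe-then-kill: kill -0 only tests PID existence.
    killprocess_sketch() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid" 2>/dev/null
        else
            echo "Process with pid $pid is not found"
        fi
    }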
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:56.214 ************************************
00:26:56.214 END TEST nvmf_shutdown
00:26:56.214 ************************************
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:56.214 ************************************
00:26:56.214 START TEST nvmf_nsid
00:26:56.214 ************************************
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:26:56.214 * Looking for test storage...
00:26:56.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:56.214 --rc genhtml_branch_coverage=1
00:26:56.214 --rc genhtml_function_coverage=1
00:26:56.214 --rc genhtml_legend=1
00:26:56.214 --rc geninfo_all_blocks=1
00:26:56.214 --rc geninfo_unexecuted_blocks=1
00:26:56.214
00:26:56.214 '
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' [same --rc flag block as above] '
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov [same --rc flag block as above] '
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov [same --rc flag block as above] '
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
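The lt 1.15 2 / cmp_versions trace above is scripts/common.sh deciding that the installed lcov (1.15) predates version 2, which selects the --rc option spelling exported into LCOV_OPTS next. A condensed sketch of that comparison logic; the real cmp_versions also handles >, ==, and the ver1_l/ver2_l length bookkeeping shown in the trace:

    # Split on dots/dashes and compare numerically, field by field.
    lt() { cmp_versions_sketch "$1" '<' "$2"; }
    cmp_versions_sketch() {
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $2 == '<' ]]; return; fi
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $2 == '>' ]]; return; fi
        done
        return 1   # versions equal, so a strict < or > is false
    }
    # cmp_versions_sketch 1.15 '<' 2 succeeds, as in the trace (1 < 2 on the first field)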
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain prefix repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same toolchain prefix repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.214 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same toolchain prefix repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the PATH value exported above]
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:56.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.215 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:27:04.362 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:04.363 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:04.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
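The device-matching loop that just finished keys each NIC off its PCI vendor:device pair (0x8086:0x159b is an Intel E810-family port bound to the ice driver). A minimal standalone sketch of the same idea, assuming a Linux sysfs layout; the helper name is illustrative and not part of nvmf/common.sh:

    # Sketch only: enumerate PCI functions whose vendor/device IDs match,
    # printing the bus address and any bound network interface names.
    find_nics_by_id() {
        local want_vendor=$1 want_device=$2 pci
        for pci in /sys/bus/pci/devices/*; do
            [[ -r $pci/vendor && -r $pci/device ]] || continue
            [[ $(<"$pci/vendor") == "$want_vendor" && $(<"$pci/device") == "$want_device" ]] || continue
            echo "${pci##*/} $(ls "$pci/net" 2>/dev/null)"
        done
    }
    find_nics_by_id 0x8086 0x159b   # would match the two 0000:4b:00.x ports found above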
00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:04.363 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:04.363 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.363 06:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:04.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:27:04.363 00:27:04.363 --- 10.0.0.2 ping statistics --- 00:27:04.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.363 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:27:04.363 00:27:04.363 --- 10.0.0.1 ping statistics --- 00:27:04.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.363 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:04.363 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2912842 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2912842 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2912842 ']' 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:04.364 06:37:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:04.364 [2024-11-20 06:37:23.943500] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:27:04.364 [2024-11-20 06:37:23.943567] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.364 [2024-11-20 06:37:24.041223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.364 [2024-11-20 06:37:24.091748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.364 [2024-11-20 06:37:24.091797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.364 [2024-11-20 06:37:24.091806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.364 [2024-11-20 06:37:24.091814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.364 [2024-11-20 06:37:24.091821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.364 [2024-11-20 06:37:24.092643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2913057 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7dd3d11f-0d80-40c1-9b83-d035897c1ebd 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ad0d328f-241e-4262-9d19-80086b4361d2 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c6331757-e1bc-4ffd-99b0-a4f64ecc61cc 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.625 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:04.625 null0 00:27:04.625 null1 00:27:04.625 [2024-11-20 06:37:24.860240] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:27:04.625 [2024-11-20 06:37:24.860307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913057 ] 00:27:04.625 null2 00:27:04.625 [2024-11-20 06:37:24.868931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.625 [2024-11-20 06:37:24.893247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2913057 /var/tmp/tgt2.sock 00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2913057 ']' 00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:27:04.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
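The NGUID verification that follows (nsid.sh@96-100 in the trace below) reduces to stripping the dashes from each generated UUID and comparing it, case-insensitively, against what nvme id-ns reports for the namespace. A condensed sketch of that check; the device path and UUID here are illustrative, taken from this run:

    # Condensed form of the uuid2nguid/nvme_get_nguid comparison traced below.
    ns_uuid=7dd3d11f-0d80-40c1-9b83-d035897c1ebd
    expected=$(tr -d - <<< "$ns_uuid")                        # 32 hex chars, dashes removed
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)  # NGUID as reported by the target
    [[ ${actual^^} == "${expected^^}" ]] && echo "nguid matches namespace uuid"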
00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:04.886 06:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:04.886 [2024-11-20 06:37:24.952437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.886 [2024-11-20 06:37:25.005966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.147 06:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:05.147 06:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:27:05.147 06:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:27:05.408 [2024-11-20 06:37:25.567325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.408 [2024-11-20 06:37:25.583514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:27:05.408 nvme0n1 nvme0n2 00:27:05.408 nvme1n1 00:27:05.408 06:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:27:05.408 06:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:27:05.408 06:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:06.796 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:06.797 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:27:07.058 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:27:07.058 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:27:07.058 06:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:27:08.000 06:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7dd3d11f-0d80-40c1-9b83-d035897c1ebd 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7dd3d11f0d8040c19b83d035897c1ebd 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7DD3D11F0D8040C19B83D035897C1EBD 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7DD3D11F0D8040C19B83D035897C1EBD == \7\D\D\3\D\1\1\F\0\D\8\0\4\0\C\1\9\B\8\3\D\0\3\5\8\9\7\C\1\E\B\D ]] 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ad0d328f-241e-4262-9d19-80086b4361d2 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ad0d328f241e42629d1980086b4361d2 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AD0D328F241E42629D1980086B4361D2 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AD0D328F241E42629D1980086B4361D2 == \A\D\0\D\3\2\8\F\2\4\1\E\4\2\6\2\9\D\1\9\8\0\0\8\6\B\4\3\6\1\D\2 ]] 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:27:08.000 06:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:27:08.000 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c6331757-e1bc-4ffd-99b0-a4f64ecc61cc 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c6331757e1bc4ffd99b0a4f64ecc61cc 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C6331757E1BC4FFD99B0A4F64ECC61CC 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C6331757E1BC4FFD99B0A4F64ECC61CC == \C\6\3\3\1\7\5\7\E\1\B\C\4\F\F\D\9\9\B\0\A\4\F\6\4\E\C\C\6\1\C\C ]] 00:27:08.261 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2913057 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2913057 ']' 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2913057 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2913057 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2913057' 00:27:08.522 killing process with pid 2913057 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2913057 00:27:08.522 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2913057 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.782 rmmod nvme_tcp 00:27:08.782 rmmod nvme_fabrics 00:27:08.782 rmmod nvme_keyring 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2912842 ']' 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2912842 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2912842 ']' 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2912842 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2912842 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2912842' 00:27:08.782 killing process with pid 2912842 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2912842 00:27:08.782 06:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2912842 00:27:08.782 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.782 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.782 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.782 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:27:08.782 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:27:08.782 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.782 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:27:09.043 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.043 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.043 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.043 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.043 06:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.957 06:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.957 00:27:10.957 real 0m15.007s 00:27:10.957 user 
0m11.482s 00:27:10.957 sys 0m6.938s 00:27:10.957 06:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:10.957 06:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:10.957 ************************************ 00:27:10.957 END TEST nvmf_nsid 00:27:10.957 ************************************ 00:27:10.957 06:37:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:10.957 00:27:10.957 real 13m4.285s 00:27:10.957 user 27m24.657s 00:27:10.957 sys 3m56.499s 00:27:10.957 06:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:10.957 06:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:10.957 ************************************ 00:27:10.957 END TEST nvmf_target_extra 00:27:10.957 ************************************ 00:27:10.957 06:37:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:10.957 06:37:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:10.957 06:37:31 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:10.957 06:37:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:11.218 ************************************ 00:27:11.218 START TEST nvmf_host 00:27:11.218 ************************************ 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:11.218 * Looking for test storage... 00:27:11.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:11.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.218 --rc genhtml_branch_coverage=1 00:27:11.218 --rc genhtml_function_coverage=1 00:27:11.218 --rc genhtml_legend=1 00:27:11.218 --rc geninfo_all_blocks=1 00:27:11.218 --rc geninfo_unexecuted_blocks=1 00:27:11.218 00:27:11.218 ' 00:27:11.218 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:11.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.218 --rc genhtml_branch_coverage=1 00:27:11.218 --rc genhtml_function_coverage=1 00:27:11.218 --rc genhtml_legend=1 00:27:11.218 --rc geninfo_all_blocks=1 00:27:11.219 --rc geninfo_unexecuted_blocks=1 00:27:11.219 00:27:11.219 ' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:11.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.219 --rc genhtml_branch_coverage=1 00:27:11.219 --rc genhtml_function_coverage=1 00:27:11.219 --rc genhtml_legend=1 00:27:11.219 --rc geninfo_all_blocks=1 00:27:11.219 --rc geninfo_unexecuted_blocks=1 00:27:11.219 00:27:11.219 ' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:11.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.219 --rc genhtml_branch_coverage=1 00:27:11.219 --rc genhtml_function_coverage=1 00:27:11.219 --rc genhtml_legend=1 00:27:11.219 --rc geninfo_all_blocks=1 00:27:11.219 --rc geninfo_unexecuted_blocks=1 00:27:11.219 00:27:11.219 ' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
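The lt 1.15 2 / cmp_versions trace a few lines up splits both version strings on ./-/: and compares them component-wise, left to right. The same idiom in isolation, as a simplified sketch with an illustrative helper name (numeric components only):

    # Simplified component-wise version comparison; returns 0 when $1 < $2.
    version_lt() {
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing parts default to 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"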
00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:11.219 06:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.481 ************************************ 00:27:11.481 START TEST nvmf_multicontroller 00:27:11.481 ************************************ 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:11.481 * Looking for test storage... 
00:27:11.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.481 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.482 --rc genhtml_branch_coverage=1 00:27:11.482 --rc genhtml_function_coverage=1 00:27:11.482 --rc genhtml_legend=1 00:27:11.482 --rc geninfo_all_blocks=1 00:27:11.482 --rc geninfo_unexecuted_blocks=1 00:27:11.482 00:27:11.482 ' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.482 --rc genhtml_branch_coverage=1 00:27:11.482 --rc genhtml_function_coverage=1 00:27:11.482 --rc genhtml_legend=1 00:27:11.482 --rc geninfo_all_blocks=1 00:27:11.482 --rc geninfo_unexecuted_blocks=1 00:27:11.482 00:27:11.482 ' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.482 --rc genhtml_branch_coverage=1 00:27:11.482 --rc genhtml_function_coverage=1 00:27:11.482 --rc genhtml_legend=1 00:27:11.482 --rc geninfo_all_blocks=1 00:27:11.482 --rc geninfo_unexecuted_blocks=1 00:27:11.482 00:27:11.482 ' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.482 --rc genhtml_branch_coverage=1 00:27:11.482 --rc genhtml_function_coverage=1 00:27:11.482 --rc genhtml_legend=1 00:27:11.482 --rc geninfo_all_blocks=1 00:27:11.482 --rc geninfo_unexecuted_blocks=1 00:27:11.482 00:27:11.482 ' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:11.482 06:37:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:11.482 06:37:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.482 06:37:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.630 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.630 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.630 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.630 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.631 
06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:19.631 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:19.631 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.631 06:37:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:19.631 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:19.631 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
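The nvmf_tcp_init call traced next splits the two e810 ports found above into an initiator/target pair: the target port is moved into a private network namespace so both ends of the NVMe/TCP connection can run on one machine. A minimal standalone sketch of the same sequence, reconstructed from the xtrace below (cvl_0_0/cvl_0_1 are the renamed ice ports; the 10.0.0.x addresses are this run's defaults):

    ip netns add cvl_0_0_ns_spdk                         # private netns for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator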
00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.631 06:37:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.631 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.631 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.631 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:27:19.632 00:27:19.632 --- 10.0.0.2 ping statistics --- 00:27:19.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.632 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:19.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:27:19.632 00:27:19.632 --- 10.0.0.1 ping statistics --- 00:27:19.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.632 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2918156 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2918156 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2918156 ']' 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:19.632 06:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.632 [2024-11-20 06:37:39.362309] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:27:19.632 [2024-11-20 06:37:39.362376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.632 [2024-11-20 06:37:39.463196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:19.632 [2024-11-20 06:37:39.516231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.632 [2024-11-20 06:37:39.516283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.632 [2024-11-20 06:37:39.516291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.632 [2024-11-20 06:37:39.516298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.632 [2024-11-20 06:37:39.516305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:19.632 [2024-11-20 06:37:39.518092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.632 [2024-11-20 06:37:39.518252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.632 [2024-11-20 06:37:39.518253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 [2024-11-20 06:37:40.247848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 Malloc0 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 [2024-11-20 06:37:40.323317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 [2024-11-20 06:37:40.335208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 Malloc1 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.206 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2918328 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2918328 /var/tmp/bdevperf.sock 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2918328 ']' 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:20.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
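With the target up, the rpc_cmd calls above build the whole multicontroller topology: one TCP transport, two Malloc-backed subsystems (cnode1/cnode2), each listening on ports 4420 and 4421 of 10.0.0.2. rpc_cmd is a thin wrapper over scripts/rpc.py against the default /var/tmp/spdk.sock, so the equivalent plain invocations look like this (a sketch; $SPDK_DIR stands for the checkout path used in this run):

    r="$SPDK_DIR/scripts/rpc.py"                         # default socket /var/tmp/spdk.sock
    $r nvmf_create_transport -t tcp -o -u 8192           # same flags as the trace above
    $r bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512 B blocks
    $r nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $r nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $r nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $r nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Malloc1/cnode2 are created the same way, giving the negative tests a second subsystem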
00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:20.207 06:37:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.151 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:21.151 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:27:21.151 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:21.151 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.151 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.413 NVMe0n1 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.413 1 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.413 request: 00:27:21.413 { 00:27:21.413 "name": "NVMe0", 00:27:21.413 "trtype": "tcp", 00:27:21.413 "traddr": "10.0.0.2", 00:27:21.413 "adrfam": "ipv4", 00:27:21.413 "trsvcid": "4420", 00:27:21.413 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:21.413 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:21.413 "hostaddr": "10.0.0.1", 00:27:21.413 "prchk_reftag": false, 00:27:21.413 "prchk_guard": false, 00:27:21.413 "hdgst": false, 00:27:21.413 "ddgst": false, 00:27:21.413 "allow_unrecognized_csi": false, 00:27:21.413 "method": "bdev_nvme_attach_controller", 00:27:21.413 "req_id": 1 00:27:21.413 } 00:27:21.413 Got JSON-RPC error response 00:27:21.413 response: 00:27:21.413 { 00:27:21.413 "code": -114, 00:27:21.413 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:21.413 } 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.413 request: 00:27:21.413 { 00:27:21.413 "name": "NVMe0", 00:27:21.413 "trtype": "tcp", 00:27:21.413 "traddr": "10.0.0.2", 00:27:21.413 "adrfam": "ipv4", 00:27:21.413 "trsvcid": "4420", 00:27:21.413 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:21.413 "hostaddr": "10.0.0.1", 00:27:21.413 "prchk_reftag": false, 00:27:21.413 "prchk_guard": false, 00:27:21.413 "hdgst": false, 00:27:21.413 "ddgst": false, 00:27:21.413 "allow_unrecognized_csi": false, 00:27:21.413 "method": "bdev_nvme_attach_controller", 00:27:21.413 "req_id": 1 00:27:21.413 } 00:27:21.413 Got JSON-RPC error response 00:27:21.413 response: 00:27:21.413 { 00:27:21.413 "code": -114, 00:27:21.413 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:21.413 } 00:27:21.413 06:37:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.413 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.413 request: 00:27:21.413 { 00:27:21.413 "name": "NVMe0", 00:27:21.413 "trtype": "tcp", 00:27:21.413 "traddr": "10.0.0.2", 00:27:21.413 "adrfam": "ipv4", 00:27:21.413 "trsvcid": "4420", 00:27:21.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.413 "hostaddr": "10.0.0.1", 00:27:21.413 "prchk_reftag": false, 00:27:21.413 "prchk_guard": false, 00:27:21.413 "hdgst": false, 00:27:21.413 "ddgst": false, 00:27:21.413 "multipath": "disable", 00:27:21.413 "allow_unrecognized_csi": false, 00:27:21.413 "method": "bdev_nvme_attach_controller", 00:27:21.413 "req_id": 1 00:27:21.413 } 00:27:21.413 Got JSON-RPC error response 00:27:21.413 response: 00:27:21.413 { 00:27:21.413 "code": -114, 00:27:21.414 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:21.414 } 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:21.414 06:37:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.414 request: 00:27:21.414 { 00:27:21.414 "name": "NVMe0", 00:27:21.414 "trtype": "tcp", 00:27:21.414 "traddr": "10.0.0.2", 00:27:21.414 "adrfam": "ipv4", 00:27:21.414 "trsvcid": "4420", 00:27:21.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.414 "hostaddr": "10.0.0.1", 00:27:21.414 "prchk_reftag": false, 00:27:21.414 "prchk_guard": false, 00:27:21.414 "hdgst": false, 00:27:21.414 "ddgst": false, 00:27:21.414 "multipath": "failover", 00:27:21.414 "allow_unrecognized_csi": false, 00:27:21.414 "method": "bdev_nvme_attach_controller", 00:27:21.414 "req_id": 1 00:27:21.414 } 00:27:21.414 Got JSON-RPC error response 00:27:21.414 response: 00:27:21.414 { 00:27:21.414 "code": -114, 00:27:21.414 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:21.414 } 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.414 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.675 NVMe0n1 00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
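The four NOT cases above map out bdev_nvme_attach_controller's duplicate-name handling, each answered with JSON-RPC error -114: reusing the name NVMe0 against the same traddr/trsvcid is rejected whether the retry changes the hostnqn, points at cnode2, or passes -x failover ("already exists with the specified network path"), and with -x disable a second path is refused outright ("multipath is disabled"). Only a genuinely new network path is accepted, which is what the calls that follow exercise on port 4421 (a sketch against the bdevperf RPC socket; $SPDK_DIR as above):

    r="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # new port => accepted as an additional path under the existing name NVMe0
    $r bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # and that path can be removed again without touching the 4420 one
    $r bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1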
00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.675 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.937 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:21.937 06:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:22.880 { 00:27:22.880 "results": [ 00:27:22.880 { 00:27:22.880 "job": "NVMe0n1", 00:27:22.880 "core_mask": "0x1", 00:27:22.880 "workload": "write", 00:27:22.880 "status": "finished", 00:27:22.880 "queue_depth": 128, 00:27:22.880 "io_size": 4096, 00:27:22.880 "runtime": 1.005455, 00:27:22.880 "iops": 22798.63345450567, 00:27:22.880 "mibps": 89.05716193166278, 00:27:22.880 "io_failed": 0, 00:27:22.880 "io_timeout": 0, 00:27:22.880 "avg_latency_us": 5601.725372769708, 00:27:22.880 "min_latency_us": 2157.2266666666665, 00:27:22.880 "max_latency_us": 12069.546666666667 00:27:22.880 } 00:27:22.880 ], 00:27:22.880 "core_count": 1 00:27:22.880 } 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2918328 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 2918328 ']' 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2918328 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:22.880 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2918328 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2918328' 00:27:23.143 killing process with pid 2918328 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2918328 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2918328 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:27:23.143 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:23.143 [2024-11-20 06:37:40.466077] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:27:23.143 [2024-11-20 06:37:40.466155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918328 ] 00:27:23.143 [2024-11-20 06:37:40.559986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.143 [2024-11-20 06:37:40.612287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.143 [2024-11-20 06:37:41.964717] bdev.c:4753:bdev_name_add: *ERROR*: Bdev name 11666bf9-9c26-4b37-97b1-3f2adcd9f0ac already exists 00:27:23.143 [2024-11-20 06:37:41.964764] bdev.c:7962:bdev_register: *ERROR*: Unable to add uuid:11666bf9-9c26-4b37-97b1-3f2adcd9f0ac alias for bdev NVMe1n1 00:27:23.143 [2024-11-20 06:37:41.964775] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:23.143 Running I/O for 1 seconds... 00:27:23.143 22732.00 IOPS, 88.80 MiB/s 00:27:23.143 Latency(us) 00:27:23.143 [2024-11-20T05:37:43.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.143 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:23.143 NVMe0n1 : 1.01 22798.63 89.06 0.00 0.00 5601.73 2157.23 12069.55 00:27:23.143 [2024-11-20T05:37:43.422Z] =================================================================================================================== 00:27:23.143 [2024-11-20T05:37:43.422Z] Total : 22798.63 89.06 0.00 0.00 5601.73 2157.23 12069.55 00:27:23.143 Received shutdown signal, test time was about 1.000000 seconds 00:27:23.143 00:27:23.143 Latency(us) 00:27:23.143 [2024-11-20T05:37:43.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.143 [2024-11-20T05:37:43.422Z] =================================================================================================================== 00:27:23.143 [2024-11-20T05:37:43.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:23.143 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.143 rmmod nvme_tcp 00:27:23.143 rmmod nvme_fabrics 00:27:23.143 rmmod nvme_keyring 00:27:23.143 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.404 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:23.404 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:23.404 
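The remaining nvmftestfini steps traced below unwind the earlier nvmf_tcp_init: stop the target, strip only the SPDK-tagged firewall rules, and drop the namespace. As plain commands (a sketch; the trace redirects _remove_spdk_ns's output away, so the netns deletion is the assumed effect):

    kill "$nvmfpid" && wait "$nvmfpid"                   # killprocess (pid 2918156 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore # iptr: keep everything but SPDK rules
    ip netns delete cvl_0_0_ns_spdk                      # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                             # clear the initiator port's address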
06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2918156 ']' 00:27:23.404 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2918156 00:27:23.404 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2918156 ']' 00:27:23.404 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2918156 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2918156 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2918156' 00:27:23.405 killing process with pid 2918156 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2918156 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2918156 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.405 06:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.950 00:27:25.950 real 0m14.211s 00:27:25.950 user 0m17.682s 00:27:25.950 sys 0m6.661s 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.950 ************************************ 00:27:25.950 END TEST nvmf_multicontroller 00:27:25.950 ************************************ 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.950 ************************************ 00:27:25.950 START TEST nvmf_aer 00:27:25.950 ************************************ 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:25.950 * Looking for test storage... 00:27:25.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:25.950 06:37:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:25.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.950 --rc genhtml_branch_coverage=1 00:27:25.950 --rc genhtml_function_coverage=1 00:27:25.950 --rc genhtml_legend=1 00:27:25.950 --rc geninfo_all_blocks=1 00:27:25.950 --rc geninfo_unexecuted_blocks=1 00:27:25.950 00:27:25.950 ' 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:25.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.950 --rc genhtml_branch_coverage=1 00:27:25.950 --rc genhtml_function_coverage=1 00:27:25.950 --rc genhtml_legend=1 00:27:25.950 --rc geninfo_all_blocks=1 00:27:25.950 --rc geninfo_unexecuted_blocks=1 00:27:25.950 00:27:25.950 ' 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:25.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.950 --rc genhtml_branch_coverage=1 00:27:25.950 --rc genhtml_function_coverage=1 00:27:25.950 --rc genhtml_legend=1 00:27:25.950 --rc geninfo_all_blocks=1 00:27:25.950 --rc geninfo_unexecuted_blocks=1 00:27:25.950 00:27:25.950 ' 00:27:25.950 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:25.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.951 --rc genhtml_branch_coverage=1 00:27:25.951 --rc genhtml_function_coverage=1 00:27:25.951 --rc genhtml_legend=1 00:27:25.951 --rc geninfo_all_blocks=1 00:27:25.951 --rc geninfo_unexecuted_blocks=1 00:27:25.951 00:27:25.951 ' 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.951 06:37:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:34.092 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:34.093 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:34.093 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:34.093 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.093 06:37:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:34.093 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:34.093 
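Unrolled, the nvmf_tcp_init sequence above amounts to the namespace split below (a sketch assembled from the commands in this run; the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses are specific to this rig):

  ip netns add cvl_0_0_ns_spdk                       # target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the SPDK_NVMF comment is what lets iptr() drop the rule later via 'grep -v SPDK_NVMF'
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'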
06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:34.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:27:34.093 00:27:34.093 --- 10.0.0.2 ping statistics --- 00:27:34.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.093 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:27:34.093 00:27:34.093 --- 10.0.0.1 ping statistics --- 00:27:34.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.093 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2923088 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2923088 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2923088 ']' 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:34.093 06:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.093 [2024-11-20 06:37:53.586600] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
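The banner just printed comes from the target being launched inside that namespace. A minimal sketch of what nvmfappstart plus waitforlisten do at this point, assuming the stock rpc.py and the default /var/tmp/spdk.sock socket (the real waitforlisten in autotest_common.sh adds a retry cap and richer error handling):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # UNIX-domain sockets are not confined by network namespaces, so rpc.py can
  # poll the target from the root namespace until the RPC server answers
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 $nvmfpid || exit 1   # give up if the target died during startup
      sleep 0.5
  done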
00:27:34.093 [2024-11-20 06:37:53.586663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.093 [2024-11-20 06:37:53.687204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.093 [2024-11-20 06:37:53.741464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.094 [2024-11-20 06:37:53.741519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.094 [2024-11-20 06:37:53.741528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.094 [2024-11-20 06:37:53.741535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.094 [2024-11-20 06:37:53.741542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.094 [2024-11-20 06:37:53.743595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.094 [2024-11-20 06:37:53.743755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.094 [2024-11-20 06:37:53.743915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.094 [2024-11-20 06:37:53.743916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.355 [2024-11-20 06:37:54.461873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.355 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.356 Malloc0 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.356 [2024-11-20 06:37:54.536923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.356 [ 00:27:34.356 { 00:27:34.356 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:34.356 "subtype": "Discovery", 00:27:34.356 "listen_addresses": [], 00:27:34.356 "allow_any_host": true, 00:27:34.356 "hosts": [] 00:27:34.356 }, 00:27:34.356 { 00:27:34.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.356 "subtype": "NVMe", 00:27:34.356 "listen_addresses": [ 00:27:34.356 { 00:27:34.356 "trtype": "TCP", 00:27:34.356 "adrfam": "IPv4", 00:27:34.356 "traddr": "10.0.0.2", 00:27:34.356 "trsvcid": "4420" 00:27:34.356 } 00:27:34.356 ], 00:27:34.356 "allow_any_host": true, 00:27:34.356 "hosts": [], 00:27:34.356 "serial_number": "SPDK00000000000001", 00:27:34.356 "model_number": "SPDK bdev Controller", 00:27:34.356 "max_namespaces": 2, 00:27:34.356 "min_cntlid": 1, 00:27:34.356 "max_cntlid": 65519, 00:27:34.356 "namespaces": [ 00:27:34.356 { 00:27:34.356 "nsid": 1, 00:27:34.356 "bdev_name": "Malloc0", 00:27:34.356 "name": "Malloc0", 00:27:34.356 "nguid": "63704227C12D43E591F1D2F9355820F6", 00:27:34.356 "uuid": "63704227-c12d-43e5-91f1-d2f9355820f6" 00:27:34.356 } 00:27:34.356 ] 00:27:34.356 } 00:27:34.356 ] 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2923357 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:27:34.356 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:27:34.617 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.618 Malloc1 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.618 Asynchronous Event Request test 00:27:34.618 Attaching to 10.0.0.2 00:27:34.618 Attached to 10.0.0.2 00:27:34.618 Registering asynchronous event callbacks... 00:27:34.618 Starting namespace attribute notice tests for all controllers... 00:27:34.618 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:34.618 aer_cb - Changed Namespace 00:27:34.618 Cleaning up... 
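Condensed, the AER handshake that just completed looks like the sketch below; rpc.py stands in for the test's rpc_cmd wrapper and paths are relative to the SPDK tree. The touch file is the synchronization point: the aer tool arms its Asynchronous Event Request and only then creates the file, so the hot-added second namespace lands while the AER is outstanding.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rm -f /tmp/aer_touch_file
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # waitforfile, simplified
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the namespace-changed AEN
  wait $aerpid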
00:27:34.618 [ 00:27:34.618 { 00:27:34.618 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:34.618 "subtype": "Discovery", 00:27:34.618 "listen_addresses": [], 00:27:34.618 "allow_any_host": true, 00:27:34.618 "hosts": [] 00:27:34.618 }, 00:27:34.618 { 00:27:34.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.618 "subtype": "NVMe", 00:27:34.618 "listen_addresses": [ 00:27:34.618 { 00:27:34.618 "trtype": "TCP", 00:27:34.618 "adrfam": "IPv4", 00:27:34.618 "traddr": "10.0.0.2", 00:27:34.618 "trsvcid": "4420" 00:27:34.618 } 00:27:34.618 ], 00:27:34.618 "allow_any_host": true, 00:27:34.618 "hosts": [], 00:27:34.618 "serial_number": "SPDK00000000000001", 00:27:34.618 "model_number": "SPDK bdev Controller", 00:27:34.618 "max_namespaces": 2, 00:27:34.618 "min_cntlid": 1, 00:27:34.618 "max_cntlid": 65519, 00:27:34.618 "namespaces": [ 00:27:34.618 { 00:27:34.618 "nsid": 1, 00:27:34.618 "bdev_name": "Malloc0", 00:27:34.618 "name": "Malloc0", 00:27:34.618 "nguid": "63704227C12D43E591F1D2F9355820F6", 00:27:34.618 "uuid": "63704227-c12d-43e5-91f1-d2f9355820f6" 00:27:34.618 }, 00:27:34.618 { 00:27:34.618 "nsid": 2, 00:27:34.618 "bdev_name": "Malloc1", 00:27:34.618 "name": "Malloc1", 00:27:34.618 "nguid": "6C4D1A97C3E4462F9C92272F7B9072D3", 00:27:34.618 "uuid": "6c4d1a97-c3e4-462f-9c92-272f7b9072d3" 00:27:34.618 } 00:27:34.618 ] 00:27:34.618 } 00:27:34.618 ] 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2923357 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.618 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.879 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.879 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.879 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.879 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.879 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.879 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.880 rmmod 
nvme_tcp 00:27:34.880 rmmod nvme_fabrics 00:27:34.880 rmmod nvme_keyring 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2923088 ']' 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2923088 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2923088 ']' 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2923088 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:34.880 06:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2923088 00:27:34.880 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:34.880 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:34.880 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2923088' 00:27:34.880 killing process with pid 2923088 00:27:34.880 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2923088 00:27:34.880 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2923088 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.141 06:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.054 06:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.054 00:27:37.054 real 0m11.504s 00:27:37.054 user 0m8.155s 00:27:37.054 sys 0m6.158s 00:27:37.054 06:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:37.054 06:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:37.054 ************************************ 00:27:37.054 END TEST nvmf_aer 00:27:37.054 ************************************ 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.316 ************************************ 00:27:37.316 START TEST nvmf_async_init 00:27:37.316 ************************************ 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:37.316 * Looking for test storage... 00:27:37.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:37.316 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.578 --rc genhtml_branch_coverage=1 00:27:37.578 --rc genhtml_function_coverage=1 00:27:37.578 --rc genhtml_legend=1 00:27:37.578 --rc geninfo_all_blocks=1 00:27:37.578 --rc geninfo_unexecuted_blocks=1 00:27:37.578 00:27:37.578 ' 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.578 --rc genhtml_branch_coverage=1 00:27:37.578 --rc genhtml_function_coverage=1 00:27:37.578 --rc genhtml_legend=1 00:27:37.578 --rc geninfo_all_blocks=1 00:27:37.578 --rc geninfo_unexecuted_blocks=1 00:27:37.578 00:27:37.578 ' 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.578 --rc genhtml_branch_coverage=1 00:27:37.578 --rc genhtml_function_coverage=1 00:27:37.578 --rc genhtml_legend=1 00:27:37.578 --rc geninfo_all_blocks=1 00:27:37.578 --rc geninfo_unexecuted_blocks=1 00:27:37.578 00:27:37.578 ' 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:37.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.578 --rc genhtml_branch_coverage=1 00:27:37.578 --rc genhtml_function_coverage=1 00:27:37.578 --rc genhtml_legend=1 00:27:37.578 --rc geninfo_all_blocks=1 00:27:37.578 --rc geninfo_unexecuted_blocks=1 00:27:37.578 00:27:37.578 ' 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.578 06:37:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.578 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:37.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:37.579 06:37:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2d3cfea67f68461681a1f3105ec8a761 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.579 06:37:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:45.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:45.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:45.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.935 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:45.936 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.936 06:38:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.936 06:38:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:27:45.936 00:27:45.936 --- 10.0.0.2 ping statistics --- 00:27:45.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.936 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:27:45.936 00:27:45.936 --- 10.0.0.1 ping statistics --- 00:27:45.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.936 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2927697 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2927697 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2927697 ']' 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:45.936 06:38:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.936 [2024-11-20 06:38:05.257152] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:27:45.936 [2024-11-20 06:38:05.257227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.936 [2024-11-20 06:38:05.356006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.936 [2024-11-20 06:38:05.407470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.936 [2024-11-20 06:38:05.407515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.936 [2024-11-20 06:38:05.407524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.936 [2024-11-20 06:38:05.407531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.936 [2024-11-20 06:38:05.407538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.936 [2024-11-20 06:38:05.408338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.936 [2024-11-20 06:38:06.123839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.936 null0 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2d3cfea67f68461681a1f3105ec8a761 00:27:45.936 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:45.937 [2024-11-20 06:38:06.184200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.937 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.198 nvme0n1 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.198 [ 00:27:46.198 { 00:27:46.198 "name": "nvme0n1", 00:27:46.198 "aliases": [ 00:27:46.198 "2d3cfea6-7f68-4616-81a1-f3105ec8a761" 00:27:46.198 ], 00:27:46.198 "product_name": "NVMe disk", 00:27:46.198 "block_size": 512, 00:27:46.198 "num_blocks": 2097152, 00:27:46.198 "uuid": "2d3cfea6-7f68-4616-81a1-f3105ec8a761", 00:27:46.198 "numa_id": 0, 00:27:46.198 "assigned_rate_limits": { 00:27:46.198 "rw_ios_per_sec": 0, 00:27:46.198 "rw_mbytes_per_sec": 0, 00:27:46.198 "r_mbytes_per_sec": 0, 00:27:46.198 "w_mbytes_per_sec": 0 00:27:46.198 }, 00:27:46.198 "claimed": false, 00:27:46.198 "zoned": false, 00:27:46.198 "supported_io_types": { 00:27:46.198 "read": true, 00:27:46.198 "write": true, 00:27:46.198 "unmap": false, 00:27:46.198 "flush": true, 00:27:46.198 "reset": true, 00:27:46.198 "nvme_admin": true, 00:27:46.198 "nvme_io": true, 00:27:46.198 "nvme_io_md": false, 00:27:46.198 "write_zeroes": true, 00:27:46.198 "zcopy": false, 00:27:46.198 "get_zone_info": false, 00:27:46.198 "zone_management": false, 00:27:46.198 "zone_append": false, 00:27:46.198 "compare": true, 00:27:46.198 "compare_and_write": true, 00:27:46.198 "abort": true, 00:27:46.198 "seek_hole": false, 00:27:46.198 "seek_data": false, 00:27:46.198 "copy": true, 00:27:46.198 "nvme_iov_md": false 00:27:46.198 }, 00:27:46.198 
"memory_domains": [ 00:27:46.198 { 00:27:46.198 "dma_device_id": "system", 00:27:46.198 "dma_device_type": 1 00:27:46.198 } 00:27:46.198 ], 00:27:46.198 "driver_specific": { 00:27:46.198 "nvme": [ 00:27:46.198 { 00:27:46.198 "trid": { 00:27:46.198 "trtype": "TCP", 00:27:46.198 "adrfam": "IPv4", 00:27:46.198 "traddr": "10.0.0.2", 00:27:46.198 "trsvcid": "4420", 00:27:46.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:46.198 }, 00:27:46.198 "ctrlr_data": { 00:27:46.198 "cntlid": 1, 00:27:46.198 "vendor_id": "0x8086", 00:27:46.198 "model_number": "SPDK bdev Controller", 00:27:46.198 "serial_number": "00000000000000000000", 00:27:46.198 "firmware_revision": "25.01", 00:27:46.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.198 "oacs": { 00:27:46.198 "security": 0, 00:27:46.198 "format": 0, 00:27:46.198 "firmware": 0, 00:27:46.198 "ns_manage": 0 00:27:46.198 }, 00:27:46.198 "multi_ctrlr": true, 00:27:46.198 "ana_reporting": false 00:27:46.198 }, 00:27:46.198 "vs": { 00:27:46.198 "nvme_version": "1.3" 00:27:46.198 }, 00:27:46.198 "ns_data": { 00:27:46.198 "id": 1, 00:27:46.198 "can_share": true 00:27:46.198 } 00:27:46.198 } 00:27:46.198 ], 00:27:46.198 "mp_policy": "active_passive" 00:27:46.198 } 00:27:46.198 } 00:27:46.198 ] 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.198 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.198 [2024-11-20 06:38:06.460687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:46.198 [2024-11-20 06:38:06.460768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f6ce0 (9): Bad file descriptor 00:27:46.460 [2024-11-20 06:38:06.593268] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 [ 00:27:46.460 { 00:27:46.460 "name": "nvme0n1", 00:27:46.460 "aliases": [ 00:27:46.460 "2d3cfea6-7f68-4616-81a1-f3105ec8a761" 00:27:46.460 ], 00:27:46.460 "product_name": "NVMe disk", 00:27:46.460 "block_size": 512, 00:27:46.460 "num_blocks": 2097152, 00:27:46.460 "uuid": "2d3cfea6-7f68-4616-81a1-f3105ec8a761", 00:27:46.460 "numa_id": 0, 00:27:46.460 "assigned_rate_limits": { 00:27:46.460 "rw_ios_per_sec": 0, 00:27:46.460 "rw_mbytes_per_sec": 0, 00:27:46.460 "r_mbytes_per_sec": 0, 00:27:46.460 "w_mbytes_per_sec": 0 00:27:46.460 }, 00:27:46.460 "claimed": false, 00:27:46.460 "zoned": false, 00:27:46.460 "supported_io_types": { 00:27:46.460 "read": true, 00:27:46.460 "write": true, 00:27:46.460 "unmap": false, 00:27:46.460 "flush": true, 00:27:46.460 "reset": true, 00:27:46.460 "nvme_admin": true, 00:27:46.460 "nvme_io": true, 00:27:46.460 "nvme_io_md": false, 00:27:46.460 "write_zeroes": true, 00:27:46.460 "zcopy": false, 00:27:46.460 "get_zone_info": false, 00:27:46.460 "zone_management": false, 00:27:46.460 "zone_append": false, 00:27:46.460 "compare": true, 00:27:46.460 "compare_and_write": true, 00:27:46.460 "abort": true, 00:27:46.460 "seek_hole": false, 00:27:46.460 "seek_data": false, 00:27:46.460 "copy": true, 00:27:46.460 "nvme_iov_md": false 00:27:46.460 }, 00:27:46.460 "memory_domains": [ 00:27:46.460 { 00:27:46.460 "dma_device_id": "system", 00:27:46.460 "dma_device_type": 1 00:27:46.460 } 00:27:46.460 ], 00:27:46.460 "driver_specific": { 00:27:46.460 "nvme": [ 00:27:46.460 { 00:27:46.460 "trid": { 00:27:46.460 "trtype": "TCP", 00:27:46.460 "adrfam": "IPv4", 00:27:46.460 "traddr": "10.0.0.2", 00:27:46.460 "trsvcid": "4420", 00:27:46.460 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:46.460 }, 00:27:46.460 "ctrlr_data": { 00:27:46.460 "cntlid": 2, 00:27:46.460 "vendor_id": "0x8086", 00:27:46.460 "model_number": "SPDK bdev Controller", 00:27:46.460 "serial_number": "00000000000000000000", 00:27:46.460 "firmware_revision": "25.01", 00:27:46.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.460 "oacs": { 00:27:46.460 "security": 0, 00:27:46.460 "format": 0, 00:27:46.460 "firmware": 0, 00:27:46.460 "ns_manage": 0 00:27:46.460 }, 00:27:46.460 "multi_ctrlr": true, 00:27:46.460 "ana_reporting": false 00:27:46.460 }, 00:27:46.460 "vs": { 00:27:46.460 "nvme_version": "1.3" 00:27:46.460 }, 00:27:46.460 "ns_data": { 00:27:46.460 "id": 1, 00:27:46.460 "can_share": true 00:27:46.460 } 00:27:46.460 } 00:27:46.460 ], 00:27:46.460 "mp_policy": "active_passive" 00:27:46.460 } 00:27:46.460 } 00:27:46.460 ] 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
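Each bdev_get_bdevs dump above carries the fields the test compares across reconnects: the UUID stays fixed while cntlid and trsvcid track the live connection. A minimal sketch for pulling just those fields, assuming jq is installed (the test itself parses the JSON with its own shell helpers):

  rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0]
    | {uuid, cntlid: .driver_specific.nvme[0].ctrlr_data.cntlid,
       trsvcid: .driver_specific.nvme[0].trid.trsvcid}'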
00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.CEq3FK3H78 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.CEq3FK3H78 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.CEq3FK3H78 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 [2024-11-20 06:38:06.685395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:46.460 [2024-11-20 06:38:06.685581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.460 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:46.461 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.461 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.461 [2024-11-20 06:38:06.709473] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:46.722 nvme0n1 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
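The TLS leg condenses to the target-side sequence traced above: stage a 0600 PSK file, register it in the keyring, lock the subsystem down to named hosts, open a --secure-channel listener on 4421, and grant host1 the key. The same steps as plain commands (the PSK is the test's sample interchange key from the trace; the rpc.py path is assumed):

  key=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
  chmod 0600 "$key"
  rpc.py keyring_file_add_key key0 "$key"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator side, as in the trace:
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0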
00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.722 [ 00:27:46.722 { 00:27:46.722 "name": "nvme0n1", 00:27:46.722 "aliases": [ 00:27:46.722 "2d3cfea6-7f68-4616-81a1-f3105ec8a761" 00:27:46.722 ], 00:27:46.722 "product_name": "NVMe disk", 00:27:46.722 "block_size": 512, 00:27:46.722 "num_blocks": 2097152, 00:27:46.722 "uuid": "2d3cfea6-7f68-4616-81a1-f3105ec8a761", 00:27:46.722 "numa_id": 0, 00:27:46.722 "assigned_rate_limits": { 00:27:46.722 "rw_ios_per_sec": 0, 00:27:46.722 "rw_mbytes_per_sec": 0, 00:27:46.722 "r_mbytes_per_sec": 0, 00:27:46.722 "w_mbytes_per_sec": 0 00:27:46.722 }, 00:27:46.722 "claimed": false, 00:27:46.722 "zoned": false, 00:27:46.722 "supported_io_types": { 00:27:46.722 "read": true, 00:27:46.722 "write": true, 00:27:46.722 "unmap": false, 00:27:46.722 "flush": true, 00:27:46.722 "reset": true, 00:27:46.722 "nvme_admin": true, 00:27:46.722 "nvme_io": true, 00:27:46.722 "nvme_io_md": false, 00:27:46.722 "write_zeroes": true, 00:27:46.722 "zcopy": false, 00:27:46.722 "get_zone_info": false, 00:27:46.722 "zone_management": false, 00:27:46.722 "zone_append": false, 00:27:46.722 "compare": true, 00:27:46.722 "compare_and_write": true, 00:27:46.722 "abort": true, 00:27:46.722 "seek_hole": false, 00:27:46.722 "seek_data": false, 00:27:46.722 "copy": true, 00:27:46.722 "nvme_iov_md": false 00:27:46.722 }, 00:27:46.722 "memory_domains": [ 00:27:46.722 { 00:27:46.722 "dma_device_id": "system", 00:27:46.722 "dma_device_type": 1 00:27:46.722 } 00:27:46.722 ], 00:27:46.722 "driver_specific": { 00:27:46.722 "nvme": [ 00:27:46.722 { 00:27:46.722 "trid": { 00:27:46.722 "trtype": "TCP", 00:27:46.722 "adrfam": "IPv4", 00:27:46.722 "traddr": "10.0.0.2", 00:27:46.722 "trsvcid": "4421", 00:27:46.722 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:46.722 }, 00:27:46.722 "ctrlr_data": { 00:27:46.722 "cntlid": 3, 00:27:46.722 "vendor_id": "0x8086", 00:27:46.722 "model_number": "SPDK bdev Controller", 00:27:46.722 "serial_number": "00000000000000000000", 00:27:46.722 "firmware_revision": "25.01", 00:27:46.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.722 "oacs": { 00:27:46.722 "security": 0, 00:27:46.722 "format": 0, 00:27:46.722 "firmware": 0, 00:27:46.722 "ns_manage": 0 00:27:46.722 }, 00:27:46.722 "multi_ctrlr": true, 00:27:46.722 "ana_reporting": false 00:27:46.722 }, 00:27:46.722 "vs": { 00:27:46.722 "nvme_version": "1.3" 00:27:46.722 }, 00:27:46.722 "ns_data": { 00:27:46.722 "id": 1, 00:27:46.722 "can_share": true 00:27:46.722 } 00:27:46.722 } 00:27:46.722 ], 00:27:46.722 "mp_policy": "active_passive" 00:27:46.722 } 00:27:46.722 } 00:27:46.722 ] 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.CEq3FK3H78 00:27:46.722 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
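Threaded through all three dumps is the identity check: the hyphen-stripped NGUID handed to nvmf_subsystem_add_ns (-g 2d3cfea6...) comes back on the initiator as the bdev UUID "2d3cfea6-7f68-4616-81a1-f3105ec8a761", surviving both the reset and the TLS reattach on port 4421. The mapping is purely re-inserting the 8-4-4-4-12 hyphen grouping; a sketch:

  nguid=2d3cfea67f68461681a1f3105ec8a761
  sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/' <<< "$nguid"
  # -> 2d3cfea6-7f68-4616-81a1-f3105ec8a761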
00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:46.723 rmmod nvme_tcp 00:27:46.723 rmmod nvme_fabrics 00:27:46.723 rmmod nvme_keyring 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2927697 ']' 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2927697 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2927697 ']' 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2927697 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2927697 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2927697' 00:27:46.723 killing process with pid 2927697 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2927697 00:27:46.723 06:38:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2927697 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
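The iptr teardown traced just above works because setup tagged its ACCEPT rule with an SPDK_NVMF comment; cleanup can then strip exactly the rules the test added by round-tripping the whole ruleset, as the trace shows:

  iptables-save | grep -v SPDK_NVMF | iptables-restore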
00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.984 06:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:49.533 00:27:49.533 real 0m11.810s 00:27:49.533 user 0m4.187s 00:27:49.533 sys 0m6.207s 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:49.533 ************************************ 00:27:49.533 END TEST nvmf_async_init 00:27:49.533 ************************************ 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.533 ************************************ 00:27:49.533 START TEST dma 00:27:49.533 ************************************ 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:49.533 * Looking for test storage... 00:27:49.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:49.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.533 --rc genhtml_branch_coverage=1 00:27:49.533 --rc genhtml_function_coverage=1 00:27:49.533 --rc genhtml_legend=1 00:27:49.533 --rc geninfo_all_blocks=1 00:27:49.533 --rc geninfo_unexecuted_blocks=1 00:27:49.533 00:27:49.533 ' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:49.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.533 --rc genhtml_branch_coverage=1 00:27:49.533 --rc genhtml_function_coverage=1 00:27:49.533 --rc genhtml_legend=1 00:27:49.533 --rc geninfo_all_blocks=1 00:27:49.533 --rc geninfo_unexecuted_blocks=1 00:27:49.533 00:27:49.533 ' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:49.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.533 --rc genhtml_branch_coverage=1 00:27:49.533 --rc genhtml_function_coverage=1 00:27:49.533 --rc genhtml_legend=1 00:27:49.533 --rc geninfo_all_blocks=1 00:27:49.533 --rc geninfo_unexecuted_blocks=1 00:27:49.533 00:27:49.533 ' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:49.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.533 --rc genhtml_branch_coverage=1 00:27:49.533 --rc genhtml_function_coverage=1 00:27:49.533 --rc genhtml_legend=1 00:27:49.533 --rc geninfo_all_blocks=1 00:27:49.533 --rc geninfo_unexecuted_blocks=1 00:27:49.533 00:27:49.533 ' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.533 
06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.533 06:38:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:49.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:49.534 00:27:49.534 real 0m0.235s 00:27:49.534 user 0m0.145s 00:27:49.534 sys 0m0.106s 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 ************************************ 00:27:49.534 END TEST dma 00:27:49.534 ************************************ 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 ************************************ 00:27:49.534 START TEST nvmf_identify 00:27:49.534 
************************************ 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:49.534 * Looking for test storage... 00:27:49.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.534 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.796 --rc genhtml_branch_coverage=1 00:27:49.796 --rc genhtml_function_coverage=1 00:27:49.796 --rc genhtml_legend=1 00:27:49.796 --rc geninfo_all_blocks=1 00:27:49.796 --rc geninfo_unexecuted_blocks=1 00:27:49.796 00:27:49.796 ' 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.796 --rc genhtml_branch_coverage=1 00:27:49.796 --rc genhtml_function_coverage=1 00:27:49.796 --rc genhtml_legend=1 00:27:49.796 --rc geninfo_all_blocks=1 00:27:49.796 --rc geninfo_unexecuted_blocks=1 00:27:49.796 00:27:49.796 ' 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.796 --rc genhtml_branch_coverage=1 00:27:49.796 --rc genhtml_function_coverage=1 00:27:49.796 --rc genhtml_legend=1 00:27:49.796 --rc geninfo_all_blocks=1 00:27:49.796 --rc geninfo_unexecuted_blocks=1 00:27:49.796 00:27:49.796 ' 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.796 --rc genhtml_branch_coverage=1 00:27:49.796 --rc genhtml_function_coverage=1 00:27:49.796 --rc genhtml_legend=1 00:27:49.796 --rc geninfo_all_blocks=1 00:27:49.796 --rc geninfo_unexecuted_blocks=1 00:27:49.796 00:27:49.796 ' 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.796 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:49.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:49.797 06:38:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:57.943 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.943 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:57.944 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
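The xtrace above is nvmf/common.sh mapping supported NIC PCI IDs to kernel net devices: pci_devs is seeded with the Intel E810 IDs (0x1592/0x159b), and each matching function is then globbed under /sys/bus/pci/devices/$pci/net/. A condensed standalone sketch of the same idea, using lspci in place of the harness's pci_bus_cache (the 8086:159b ID is taken from this run; everything else is illustrative):

# List E810 (8086:159b) functions, then the netdevs bound to each one.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
    done
done

Separately, the earlier "[: : integer expression expected" warning from nvmf/common.sh line 33 is the empty-operand test '[' '' -eq 1 ']' on an unset flag; defaulting the expansion, e.g. [[ "${flag:-0}" -eq 1 ]] (flag name illustrative), would silence it without changing the result.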
00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:57.944 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:57.944 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:57.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:27:57.944 00:27:57.944 --- 10.0.0.2 ping statistics --- 00:27:57.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.944 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:57.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:27:57.944 00:27:57.944 --- 10.0.0.1 ping statistics --- 00:27:57.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.944 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2932280 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2932280 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2932280 ']' 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.944 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:57.945 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.945 [2024-11-20 06:38:17.434822] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
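Everything from ip netns add through the two pings above is the phy-mode network plumbing: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, TCP port 4420 is opened in iptables, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch with the same names and addresses (the interface names are specific to this rig):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The sub-millisecond RTTs in the ping output (0.691 ms and 0.295 ms) confirm the two E810 ports reach each other before any NVMe/TCP traffic is attempted.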
00:27:57.945 [2024-11-20 06:38:17.434893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.945 [2024-11-20 06:38:17.537521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.945 [2024-11-20 06:38:17.592277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.945 [2024-11-20 06:38:17.592331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.945 [2024-11-20 06:38:17.592340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.945 [2024-11-20 06:38:17.592347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.945 [2024-11-20 06:38:17.592354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.945 [2024-11-20 06:38:17.594696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.945 [2024-11-20 06:38:17.594861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.945 [2024-11-20 06:38:17.595024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.945 [2024-11-20 06:38:17.595025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 [2024-11-20 06:38:18.271085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 Malloc0 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 [2024-11-20 06:38:18.389901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 [ 00:27:58.206 { 00:27:58.206 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:58.206 "subtype": "Discovery", 00:27:58.206 "listen_addresses": [ 00:27:58.206 { 00:27:58.206 "trtype": "TCP", 00:27:58.206 "adrfam": "IPv4", 00:27:58.206 "traddr": "10.0.0.2", 00:27:58.206 "trsvcid": "4420" 00:27:58.206 } 00:27:58.206 ], 00:27:58.206 "allow_any_host": true, 00:27:58.206 "hosts": [] 00:27:58.206 }, 00:27:58.206 { 00:27:58.206 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.206 "subtype": "NVMe", 00:27:58.206 "listen_addresses": [ 00:27:58.206 { 00:27:58.206 "trtype": "TCP", 00:27:58.206 "adrfam": "IPv4", 00:27:58.206 "traddr": "10.0.0.2", 00:27:58.206 "trsvcid": "4420" 00:27:58.206 } 00:27:58.206 ], 00:27:58.206 "allow_any_host": true, 00:27:58.206 "hosts": [], 00:27:58.206 "serial_number": "SPDK00000000000001", 00:27:58.206 "model_number": "SPDK bdev Controller", 00:27:58.206 "max_namespaces": 32, 00:27:58.206 "min_cntlid": 1, 00:27:58.206 "max_cntlid": 65519, 00:27:58.206 "namespaces": [ 00:27:58.206 { 00:27:58.206 "nsid": 1, 00:27:58.206 "bdev_name": "Malloc0", 00:27:58.206 "name": "Malloc0", 00:27:58.206 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:58.206 "eui64": "ABCDEF0123456789", 00:27:58.206 "uuid": "6b5c10f1-92ea-450e-82db-c03c92669e52" 00:27:58.206 } 00:27:58.206 ] 00:27:58.206 } 00:27:58.206 ] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.206 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:58.206 [2024-11-20 06:38:18.456725] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:27:58.206 [2024-11-20 06:38:18.456813] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932458 ] 00:27:58.471 [2024-11-20 06:38:18.518901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:58.471 [2024-11-20 06:38:18.518977] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:58.471 [2024-11-20 06:38:18.518983] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:58.471 [2024-11-20 06:38:18.519004] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:58.471 [2024-11-20 06:38:18.519017] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:58.471 [2024-11-20 06:38:18.519854] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:58.471 [2024-11-20 06:38:18.519900] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10e1690 0 00:27:58.471 [2024-11-20 06:38:18.526180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:58.471 [2024-11-20 06:38:18.526197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:58.471 [2024-11-20 06:38:18.526203] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:58.471 [2024-11-20 06:38:18.526212] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:58.471 [2024-11-20 06:38:18.526256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.526263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.526268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.471 [2024-11-20 06:38:18.526286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:58.471 [2024-11-20 06:38:18.526310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.471 [2024-11-20 06:38:18.534173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.471 [2024-11-20 06:38:18.534183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.471 [2024-11-20 06:38:18.534187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.534192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.471 [2024-11-20 06:38:18.534205] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:58.471 [2024-11-20 06:38:18.534214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:58.471 [2024-11-20 06:38:18.534219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:58.471 [2024-11-20 06:38:18.534236] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.534240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.534244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.471 [2024-11-20 06:38:18.534253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.471 [2024-11-20 06:38:18.534270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.471 [2024-11-20 06:38:18.534503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.471 [2024-11-20 06:38:18.534510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.471 [2024-11-20 06:38:18.534513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.534517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.471 [2024-11-20 06:38:18.534524] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:58.471 [2024-11-20 06:38:18.534531] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:58.471 [2024-11-20 06:38:18.534539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.534543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.471 [2024-11-20 06:38:18.534546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.471 [2024-11-20 06:38:18.534553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.472 [2024-11-20 06:38:18.534564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.472 [2024-11-20 06:38:18.534789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.472 [2024-11-20 06:38:18.534796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.472 [2024-11-20 06:38:18.534799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.534803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.472 [2024-11-20 06:38:18.534809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:58.472 [2024-11-20 06:38:18.534818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:58.472 [2024-11-20 06:38:18.534829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.534833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.534836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.472 [2024-11-20 06:38:18.534843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.472 [2024-11-20 06:38:18.534854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 
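The rpc_cmd calls above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are what produced the subsystem JSON, and the DEBUG trace around this point is spdk_nvme_identify performing the fabrics bring-up against the discovery service: ICReq/ICResp, FABRIC CONNECT on the admin queue, property reads of VS and CAP, then the CC.EN / CSTS.RDY handshake before IDENTIFY. Since rpc_cmd is a thin wrapper over scripts/rpc.py, the same target can be provisioned by hand; a sketch using the values from this run (nvme-cli's discover stands in for spdk_nvme_identify on the last line):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme discover -t tcp -a 10.0.0.2 -s 4420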
00:27:58.472 [2024-11-20 06:38:18.535092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.472 [2024-11-20 06:38:18.535098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.472 [2024-11-20 06:38:18.535102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.472 [2024-11-20 06:38:18.535111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:58.472 [2024-11-20 06:38:18.535121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.472 [2024-11-20 06:38:18.535136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.472 [2024-11-20 06:38:18.535146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.472 [2024-11-20 06:38:18.535342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.472 [2024-11-20 06:38:18.535349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.472 [2024-11-20 06:38:18.535353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.472 [2024-11-20 06:38:18.535362] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:58.472 [2024-11-20 06:38:18.535367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:58.472 [2024-11-20 06:38:18.535375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:58.472 [2024-11-20 06:38:18.535488] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:58.472 [2024-11-20 06:38:18.535493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:58.472 [2024-11-20 06:38:18.535503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.472 [2024-11-20 06:38:18.535518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.472 [2024-11-20 06:38:18.535528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.472 [2024-11-20 06:38:18.535741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.472 [2024-11-20 06:38:18.535748] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.472 [2024-11-20 06:38:18.535751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.472 [2024-11-20 06:38:18.535763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:58.472 [2024-11-20 06:38:18.535773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.535780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.472 [2024-11-20 06:38:18.535787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.472 [2024-11-20 06:38:18.535797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.472 [2024-11-20 06:38:18.536048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.472 [2024-11-20 06:38:18.536054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.472 [2024-11-20 06:38:18.536057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.472 [2024-11-20 06:38:18.536066] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:58.472 [2024-11-20 06:38:18.536071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:58.472 [2024-11-20 06:38:18.536079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:58.472 [2024-11-20 06:38:18.536088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:58.472 [2024-11-20 06:38:18.536098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.472 [2024-11-20 06:38:18.536109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.472 [2024-11-20 06:38:18.536119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.472 [2024-11-20 06:38:18.536380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.472 [2024-11-20 06:38:18.536388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.472 [2024-11-20 06:38:18.536391] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536396] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e1690): datao=0, datal=4096, cccid=0 00:27:58.472 [2024-11-20 06:38:18.536401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1143100) on tqpair(0x10e1690): expected_datao=0, payload_size=4096 00:27:58.472 [2024-11-20 06:38:18.536406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536414] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536419] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.472 [2024-11-20 06:38:18.536609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.472 [2024-11-20 06:38:18.536612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.472 [2024-11-20 06:38:18.536625] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:58.472 [2024-11-20 06:38:18.536631] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:58.472 [2024-11-20 06:38:18.536639] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:58.472 [2024-11-20 06:38:18.536649] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:58.472 [2024-11-20 06:38:18.536654] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:58.472 [2024-11-20 06:38:18.536659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:58.472 [2024-11-20 06:38:18.536671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:58.472 [2024-11-20 06:38:18.536679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.472 [2024-11-20 06:38:18.536694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.472 [2024-11-20 06:38:18.536705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.472 [2024-11-20 06:38:18.536957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.472 [2024-11-20 06:38:18.536963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.472 [2024-11-20 06:38:18.536966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.472 [2024-11-20 06:38:18.536970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690 00:27:58.473 [2024-11-20 06:38:18.536979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.536983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.536987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e1690) 00:27:58.473 
[2024-11-20 06:38:18.536993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.473 [2024-11-20 06:38:18.537000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.537013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.473 [2024-11-20 06:38:18.537019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.537032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.473 [2024-11-20 06:38:18.537039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.537052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.473 [2024-11-20 06:38:18.537057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:58.473 [2024-11-20 06:38:18.537065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:58.473 [2024-11-20 06:38:18.537071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.537084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.473 [2024-11-20 06:38:18.537096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143100, cid 0, qid 0 00:27:58.473 [2024-11-20 06:38:18.537101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143280, cid 1, qid 0 00:27:58.473 [2024-11-20 06:38:18.537106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143400, cid 2, qid 0 00:27:58.473 [2024-11-20 06:38:18.537111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.473 [2024-11-20 06:38:18.537116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143700, cid 4, qid 0 00:27:58.473 [2024-11-20 06:38:18.537384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.473 [2024-11-20 06:38:18.537391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.473 [2024-11-20 06:38:18.537394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:27:58.473 [2024-11-20 06:38:18.537398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143700) on tqpair=0x10e1690 00:27:58.473 [2024-11-20 06:38:18.537407] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:58.473 [2024-11-20 06:38:18.537413] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:27:58.473 [2024-11-20 06:38:18.537424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.537435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.473 [2024-11-20 06:38:18.537446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143700, cid 4, qid 0 00:27:58.473 [2024-11-20 06:38:18.537623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.473 [2024-11-20 06:38:18.537629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.473 [2024-11-20 06:38:18.537633] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537637] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e1690): datao=0, datal=4096, cccid=4 00:27:58.473 [2024-11-20 06:38:18.537641] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1143700) on tqpair(0x10e1690): expected_datao=0, payload_size=4096 00:27:58.473 [2024-11-20 06:38:18.537645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537662] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537666] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.473 [2024-11-20 06:38:18.537843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.473 [2024-11-20 06:38:18.537847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143700) on tqpair=0x10e1690 00:27:58.473 [2024-11-20 06:38:18.537863] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:58.473 [2024-11-20 06:38:18.537891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.537903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.473 [2024-11-20 06:38:18.537910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.537920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.537926] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.473 [2024-11-20 06:38:18.537941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143700, cid 4, qid 0 00:27:58.473 [2024-11-20 06:38:18.537946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143880, cid 5, qid 0 00:27:58.473 [2024-11-20 06:38:18.542172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.473 [2024-11-20 06:38:18.542181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.473 [2024-11-20 06:38:18.542185] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.542189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e1690): datao=0, datal=1024, cccid=4 00:27:58.473 [2024-11-20 06:38:18.542193] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1143700) on tqpair(0x10e1690): expected_datao=0, payload_size=1024 00:27:58.473 [2024-11-20 06:38:18.542198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.542205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.542208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.542214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.473 [2024-11-20 06:38:18.542220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.473 [2024-11-20 06:38:18.542223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.542227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143880) on tqpair=0x10e1690 00:27:58.473 [2024-11-20 06:38:18.582169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.473 [2024-11-20 06:38:18.582179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.473 [2024-11-20 06:38:18.582183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.582187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143700) on tqpair=0x10e1690 00:27:58.473 [2024-11-20 06:38:18.582201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.582205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e1690) 00:27:58.473 [2024-11-20 06:38:18.582213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.473 [2024-11-20 06:38:18.582230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143700, cid 4, qid 0 00:27:58.473 [2024-11-20 06:38:18.582440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.473 [2024-11-20 06:38:18.582447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.473 [2024-11-20 06:38:18.582451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.473 [2024-11-20 06:38:18.582454] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e1690): datao=0, datal=3072, cccid=4 00:27:58.473 [2024-11-20 06:38:18.582459] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1143700) on tqpair(0x10e1690): expected_datao=0, payload_size=3072 00:27:58.473 [2024-11-20 06:38:18.582463] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:58.473 [2024-11-20 06:38:18.582470] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:58.473 [2024-11-20 06:38:18.582474] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:58.473 [2024-11-20 06:38:18.582625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:58.473 [2024-11-20 06:38:18.582632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:58.473 [2024-11-20 06:38:18.582635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:58.473 [2024-11-20 06:38:18.582639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143700) on tqpair=0x10e1690
00:27:58.473 [2024-11-20 06:38:18.582652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:58.473 [2024-11-20 06:38:18.582657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e1690)
00:27:58.474 [2024-11-20 06:38:18.582663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.474 [2024-11-20 06:38:18.582678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143700, cid 4, qid 0
00:27:58.474 [2024-11-20 06:38:18.582914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:58.474 [2024-11-20 06:38:18.582920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:58.474 [2024-11-20 06:38:18.582923] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:58.474 [2024-11-20 06:38:18.582927] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e1690): datao=0, datal=8, cccid=4
00:27:58.474 [2024-11-20 06:38:18.582931] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1143700) on tqpair(0x10e1690): expected_datao=0, payload_size=8
00:27:58.474 [2024-11-20 06:38:18.582936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:58.474 [2024-11-20 06:38:18.582942] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:58.474 [2024-11-20 06:38:18.582946] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:58.474 [2024-11-20 06:38:18.626186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:58.474 [2024-11-20 06:38:18.626201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:58.474 [2024-11-20 06:38:18.626205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:58.474 [2024-11-20 06:38:18.626209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143700) on tqpair=0x10e1690
00:27:58.474 =====================================================
00:27:58.474 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:58.474 =====================================================
00:27:58.474 Controller Capabilities/Features
00:27:58.474 ================================
00:27:58.474 Vendor ID: 0000
00:27:58.474 Subsystem Vendor ID: 0000
00:27:58.474 Serial Number: ....................
00:27:58.474 Model Number: ........................................
00:27:58.474 Firmware Version: 25.01
00:27:58.474 Recommended Arb Burst: 0
00:27:58.474 IEEE OUI Identifier: 00 00 00
00:27:58.474 Multi-path I/O
00:27:58.474 May have multiple subsystem ports: No
00:27:58.474 May have multiple controllers: No
00:27:58.474 Associated with SR-IOV VF: No
00:27:58.474 Max Data Transfer Size: 131072
00:27:58.474 Max Number of Namespaces: 0
00:27:58.474 Max Number of I/O Queues: 1024
00:27:58.474 NVMe Specification Version (VS): 1.3
00:27:58.474 NVMe Specification Version (Identify): 1.3
00:27:58.474 Maximum Queue Entries: 128
00:27:58.474 Contiguous Queues Required: Yes
00:27:58.474 Arbitration Mechanisms Supported
00:27:58.474 Weighted Round Robin: Not Supported
00:27:58.474 Vendor Specific: Not Supported
00:27:58.474 Reset Timeout: 15000 ms
00:27:58.474 Doorbell Stride: 4 bytes
00:27:58.474 NVM Subsystem Reset: Not Supported
00:27:58.474 Command Sets Supported
00:27:58.474 NVM Command Set: Supported
00:27:58.474 Boot Partition: Not Supported
00:27:58.474 Memory Page Size Minimum: 4096 bytes
00:27:58.474 Memory Page Size Maximum: 4096 bytes
00:27:58.474 Persistent Memory Region: Not Supported
00:27:58.474 Optional Asynchronous Events Supported
00:27:58.474 Namespace Attribute Notices: Not Supported
00:27:58.474 Firmware Activation Notices: Not Supported
00:27:58.474 ANA Change Notices: Not Supported
00:27:58.474 PLE Aggregate Log Change Notices: Not Supported
00:27:58.474 LBA Status Info Alert Notices: Not Supported
00:27:58.474 EGE Aggregate Log Change Notices: Not Supported
00:27:58.474 Normal NVM Subsystem Shutdown event: Not Supported
00:27:58.474 Zone Descriptor Change Notices: Not Supported
00:27:58.474 Discovery Log Change Notices: Supported
00:27:58.474 Controller Attributes
00:27:58.474 128-bit Host Identifier: Not Supported
00:27:58.474 Non-Operational Permissive Mode: Not Supported
00:27:58.474 NVM Sets: Not Supported
00:27:58.474 Read Recovery Levels: Not Supported
00:27:58.474 Endurance Groups: Not Supported
00:27:58.474 Predictable Latency Mode: Not Supported
00:27:58.474 Traffic Based Keep ALive: Not Supported
00:27:58.474 Namespace Granularity: Not Supported
00:27:58.474 SQ Associations: Not Supported
00:27:58.474 UUID List: Not Supported
00:27:58.474 Multi-Domain Subsystem: Not Supported
00:27:58.474 Fixed Capacity Management: Not Supported
00:27:58.474 Variable Capacity Management: Not Supported
00:27:58.474 Delete Endurance Group: Not Supported
00:27:58.474 Delete NVM Set: Not Supported
00:27:58.474 Extended LBA Formats Supported: Not Supported
00:27:58.474 Flexible Data Placement Supported: Not Supported
00:27:58.474 
00:27:58.474 Controller Memory Buffer Support
00:27:58.474 ================================
00:27:58.474 Supported: No
00:27:58.474 
00:27:58.474 Persistent Memory Region Support
00:27:58.474 ================================
00:27:58.474 Supported: No
00:27:58.474 
00:27:58.474 Admin Command Set Attributes
00:27:58.474 ============================
00:27:58.474 Security Send/Receive: Not Supported
00:27:58.474 Format NVM: Not Supported
00:27:58.474 Firmware Activate/Download: Not Supported
00:27:58.474 Namespace Management: Not Supported
00:27:58.474 Device Self-Test: Not Supported
00:27:58.474 Directives: Not Supported
00:27:58.474 NVMe-MI: Not Supported
00:27:58.474 Virtualization Management: Not Supported
00:27:58.474 Doorbell Buffer Config: Not Supported
00:27:58.474 Get LBA Status Capability: Not Supported
00:27:58.474 Command & Feature Lockdown Capability: Not Supported
00:27:58.474 Abort Command Limit: 1
00:27:58.474 Async Event Request Limit: 4
00:27:58.474 Number of Firmware Slots: N/A
00:27:58.474 Firmware Slot 1 Read-Only: N/A
00:27:58.474 Firmware Activation Without Reset: N/A
00:27:58.474 Multiple Update Detection Support: N/A
00:27:58.474 Firmware Update Granularity: No Information Provided
00:27:58.474 Per-Namespace SMART Log: No
00:27:58.474 Asymmetric Namespace Access Log Page: Not Supported
00:27:58.474 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:58.474 Command Effects Log Page: Not Supported
00:27:58.474 Get Log Page Extended Data: Supported
00:27:58.474 Telemetry Log Pages: Not Supported
00:27:58.474 Persistent Event Log Pages: Not Supported
00:27:58.474 Supported Log Pages Log Page: May Support
00:27:58.474 Commands Supported & Effects Log Page: Not Supported
00:27:58.474 Feature Identifiers & Effects Log Page:May Support
00:27:58.474 NVMe-MI Commands & Effects Log Page: May Support
00:27:58.474 Data Area 4 for Telemetry Log: Not Supported
00:27:58.474 Error Log Page Entries Supported: 128
00:27:58.474 Keep Alive: Not Supported
00:27:58.474 
00:27:58.474 NVM Command Set Attributes
00:27:58.474 ==========================
00:27:58.474 Submission Queue Entry Size
00:27:58.474 Max: 1
00:27:58.474 Min: 1
00:27:58.474 Completion Queue Entry Size
00:27:58.474 Max: 1
00:27:58.474 Min: 1
00:27:58.474 Number of Namespaces: 0
00:27:58.474 Compare Command: Not Supported
00:27:58.474 Write Uncorrectable Command: Not Supported
00:27:58.474 Dataset Management Command: Not Supported
00:27:58.474 Write Zeroes Command: Not Supported
00:27:58.475 Set Features Save Field: Not Supported
00:27:58.475 Reservations: Not Supported
00:27:58.475 Timestamp: Not Supported
00:27:58.475 Copy: Not Supported
00:27:58.475 Volatile Write Cache: Not Present
00:27:58.475 Atomic Write Unit (Normal): 1
00:27:58.475 Atomic Write Unit (PFail): 1
00:27:58.475 Atomic Compare & Write Unit: 1
00:27:58.475 Fused Compare & Write: Supported
00:27:58.475 Scatter-Gather List
00:27:58.475 SGL Command Set: Supported
00:27:58.475 SGL Keyed: Supported
00:27:58.475 SGL Bit Bucket Descriptor: Not Supported
00:27:58.475 SGL Metadata Pointer: Not Supported
00:27:58.475 Oversized SGL: Not Supported
00:27:58.475 SGL Metadata Address: Not Supported
00:27:58.475 SGL Offset: Supported
00:27:58.475 Transport SGL Data Block: Not Supported
00:27:58.475 Replay Protected Memory Block: Not Supported
00:27:58.475 
00:27:58.475 Firmware Slot Information
00:27:58.475 =========================
00:27:58.475 Active slot: 0
00:27:58.475 
00:27:58.475 
00:27:58.475 Error Log
00:27:58.475 =========
00:27:58.475 
00:27:58.475 Active Namespaces
00:27:58.475 =================
00:27:58.475 Discovery Log Page
00:27:58.475 ==================
00:27:58.475 Generation Counter: 2
00:27:58.475 Number of Records: 2
00:27:58.475 Record Format: 0
00:27:58.475 
00:27:58.475 Discovery Log Entry 0
00:27:58.475 ----------------------
00:27:58.475 Transport Type: 3 (TCP)
00:27:58.475 Address Family: 1 (IPv4)
00:27:58.475 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:58.475 Entry Flags:
00:27:58.475 Duplicate Returned Information: 1
00:27:58.475 Explicit Persistent Connection Support for Discovery: 1
00:27:58.475 Transport Requirements:
00:27:58.475 Secure Channel: Not Required
00:27:58.475 Port ID: 0 (0x0000)
00:27:58.475 Controller ID: 65535 (0xffff)
00:27:58.475 Admin Max SQ Size: 128
00:27:58.475 Transport Service Identifier: 4420
00:27:58.475 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:58.475 Transport Address: 10.0.0.2
00:27:58.475 Discovery Log Entry 1
00:27:58.475 ----------------------
00:27:58.475 Transport Type: 3 (TCP)
00:27:58.475 Address Family: 1 (IPv4)
00:27:58.475 Subsystem Type: 2 (NVM Subsystem)
00:27:58.475 Entry Flags:
00:27:58.475 Duplicate Returned Information: 0
00:27:58.475 Explicit Persistent Connection Support for Discovery: 0
00:27:58.475 Transport Requirements:
00:27:58.475 Secure Channel: Not Required
00:27:58.475 Port ID: 0 (0x0000)
00:27:58.475 Controller ID: 65535 (0xffff)
00:27:58.475 Admin Max SQ Size: 128
00:27:58.475 Transport Service Identifier: 4420
00:27:58.475 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:58.475 Transport Address: 10.0.0.2 [2024-11-20 06:38:18.626314] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:27:58.475 [2024-11-20 06:38:18.626327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143100) on tqpair=0x10e1690
00:27:58.475 [2024-11-20 06:38:18.626335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:58.475 [2024-11-20 06:38:18.626341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143280) on tqpair=0x10e1690
00:27:58.475 [2024-11-20 06:38:18.626345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:58.475 [2024-11-20 06:38:18.626350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143400) on tqpair=0x10e1690
00:27:58.475 [2024-11-20 06:38:18.626355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:58.475 [2024-11-20 06:38:18.626360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690
00:27:58.475 [2024-11-20 06:38:18.626364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:58.475 [2024-11-20 06:38:18.626377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:58.475 [2024-11-20 06:38:18.626381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:58.475 [2024-11-20 06:38:18.626384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690)
00:27:58.475 [2024-11-20 06:38:18.626394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.475 [2024-11-20 06:38:18.626410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0
00:27:58.475 [2024-11-20 06:38:18.626607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:58.475 [2024-11-20 06:38:18.626613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:58.475 [2024-11-20 06:38:18.626617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:58.475 [2024-11-20 06:38:18.626621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690
00:27:58.475 [2024-11-20 06:38:18.626631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:58.475 [2024-11-20 06:38:18.626635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:58.475 [2024-11-20 06:38:18.626639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690)
00:27:58.475 [2024-11-20
06:38:18.626645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.475 [2024-11-20 06:38:18.626659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.475 [2024-11-20 06:38:18.626868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.475 [2024-11-20 06:38:18.626875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.475 [2024-11-20 06:38:18.626878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.475 [2024-11-20 06:38:18.626882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.475 [2024-11-20 06:38:18.626888] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:58.475 [2024-11-20 06:38:18.626893] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:58.476 [2024-11-20 06:38:18.626902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.626906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.626910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.626917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.626927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.627144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.627150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.627153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.627176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.627190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.627201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.627420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.627427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.627430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.627444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627451] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.627458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.627468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.627657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.627663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.627669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.627683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.627697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.627707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.627899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.627906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.627909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.627923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.627930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.627937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.627947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.628125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.628132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.628135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.628149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.628170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.628180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.628366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.628372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.628376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.628389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.628404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.628415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.628582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.628589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.628592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.628608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.628623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.628634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 [2024-11-20 06:38:18.628821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.628827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.628831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.628844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.628852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.476 [2024-11-20 06:38:18.628859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.476 [2024-11-20 06:38:18.628869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.476 
[2024-11-20 06:38:18.629037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.476 [2024-11-20 06:38:18.629044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.476 [2024-11-20 06:38:18.629047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.629051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.476 [2024-11-20 06:38:18.629061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.629065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.476 [2024-11-20 06:38:18.629068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.477 [2024-11-20 06:38:18.629075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.477 [2024-11-20 06:38:18.629085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.477 [2024-11-20 06:38:18.629260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.477 [2024-11-20 06:38:18.629266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.477 [2024-11-20 06:38:18.629270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.477 [2024-11-20 06:38:18.629284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.477 [2024-11-20 06:38:18.629298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.477 [2024-11-20 06:38:18.629308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.477 [2024-11-20 06:38:18.629483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.477 [2024-11-20 06:38:18.629489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.477 [2024-11-20 06:38:18.629493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.477 [2024-11-20 06:38:18.629509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.477 [2024-11-20 06:38:18.629523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.477 [2024-11-20 06:38:18.629534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.477 [2024-11-20 06:38:18.629749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.477 [2024-11-20 06:38:18.629756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:58.477 [2024-11-20 06:38:18.629759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.477 [2024-11-20 06:38:18.629774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.477 [2024-11-20 06:38:18.629788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.477 [2024-11-20 06:38:18.629798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.477 [2024-11-20 06:38:18.629983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.477 [2024-11-20 06:38:18.629990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.477 [2024-11-20 06:38:18.629993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.629997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.477 [2024-11-20 06:38:18.630007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.630011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.630014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.477 [2024-11-20 06:38:18.630021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.477 [2024-11-20 06:38:18.630031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.477 [2024-11-20 06:38:18.634170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.477 [2024-11-20 06:38:18.634179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.477 [2024-11-20 06:38:18.634182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.634186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1143580) on tqpair=0x10e1690 00:27:58.477 [2024-11-20 06:38:18.634196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.634200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.634203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e1690) 00:27:58.477 [2024-11-20 06:38:18.634210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.477 [2024-11-20 06:38:18.634222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1143580, cid 3, qid 0 00:27:58.477 [2024-11-20 06:38:18.634419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.477 [2024-11-20 06:38:18.634425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.477 [2024-11-20 06:38:18.634428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.477 [2024-11-20 06:38:18.634432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1143580) on tqpair=0x10e1690
00:27:58.477 [2024-11-20 06:38:18.634440] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:27:58.477 
00:27:58.477 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:27:58.477 [2024-11-20 06:38:18.680976] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:27:58.477 [2024-11-20 06:38:18.681023] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932466 ]
00:27:58.477 [2024-11-20 06:38:18.734666] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:27:58.477 [2024-11-20 06:38:18.734733] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:27:58.477 [2024-11-20 06:38:18.734739] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:27:58.477 [2024-11-20 06:38:18.734757] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:27:58.477 [2024-11-20 06:38:18.734771] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:27:58.478 [2024-11-20 06:38:18.738468] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:27:58.478 [2024-11-20 06:38:18.738510] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20b1690 0
00:27:58.742 [2024-11-20 06:38:18.746176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:27:58.742 [2024-11-20 06:38:18.746196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:27:58.742 [2024-11-20 06:38:18.746201] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:27:58.742 [2024-11-20 06:38:18.746204] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:27:58.742 [2024-11-20 06:38:18.746247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:58.743 [2024-11-20 06:38:18.746253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:58.743 [2024-11-20 06:38:18.746258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690)
00:27:58.743 [2024-11-20 06:38:18.746272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:27:58.743 [2024-11-20 06:38:18.746297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0
00:27:58.743 [2024-11-20 06:38:18.757174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:58.743 [2024-11-20 06:38:18.757187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:58.743 [2024-11-20 06:38:18.757195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:58.743 [2024-11-20 06:38:18.757203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690
00:27:58.743 [2024-11-20 06:38:18.757217] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID
0x0001 00:27:58.743 [2024-11-20 06:38:18.757226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:58.743 [2024-11-20 06:38:18.757235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:58.743 [2024-11-20 06:38:18.757254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.743 [2024-11-20 06:38:18.757276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.743 [2024-11-20 06:38:18.757300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.743 [2024-11-20 06:38:18.757521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.743 [2024-11-20 06:38:18.757529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.743 [2024-11-20 06:38:18.757533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.743 [2024-11-20 06:38:18.757542] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:58.743 [2024-11-20 06:38:18.757550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:58.743 [2024-11-20 06:38:18.757557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.743 [2024-11-20 06:38:18.757571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.743 [2024-11-20 06:38:18.757582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.743 [2024-11-20 06:38:18.757797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.743 [2024-11-20 06:38:18.757804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.743 [2024-11-20 06:38:18.757807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.743 [2024-11-20 06:38:18.757816] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:27:58.743 [2024-11-20 06:38:18.757827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:58.743 [2024-11-20 06:38:18.757833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.757841] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.743 [2024-11-20 06:38:18.757847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.743 [2024-11-20 06:38:18.757858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.743 [2024-11-20 06:38:18.758066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.743 [2024-11-20 06:38:18.758074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.743 [2024-11-20 06:38:18.758077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.743 [2024-11-20 06:38:18.758091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:58.743 [2024-11-20 06:38:18.758102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.743 [2024-11-20 06:38:18.758116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.743 [2024-11-20 06:38:18.758129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.743 [2024-11-20 06:38:18.758313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.743 [2024-11-20 06:38:18.758320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.743 [2024-11-20 06:38:18.758327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.743 [2024-11-20 06:38:18.758336] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:58.743 [2024-11-20 06:38:18.758344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:58.743 [2024-11-20 06:38:18.758352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:58.743 [2024-11-20 06:38:18.758462] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:58.743 [2024-11-20 06:38:18.758472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:58.743 [2024-11-20 06:38:18.758484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.743 [2024-11-20 06:38:18.758501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:58.743 [2024-11-20 06:38:18.758514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.743 [2024-11-20 06:38:18.758705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.743 [2024-11-20 06:38:18.758712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.743 [2024-11-20 06:38:18.758718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.743 [2024-11-20 06:38:18.758731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:58.743 [2024-11-20 06:38:18.758741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.743 [2024-11-20 06:38:18.758755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.743 [2024-11-20 06:38:18.758765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.743 [2024-11-20 06:38:18.758943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.743 [2024-11-20 06:38:18.758949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.743 [2024-11-20 06:38:18.758953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.758957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.743 [2024-11-20 06:38:18.758961] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:58.743 [2024-11-20 06:38:18.758966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:58.743 [2024-11-20 06:38:18.758975] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:58.743 [2024-11-20 06:38:18.758989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:58.743 [2024-11-20 06:38:18.759000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.759004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.743 [2024-11-20 06:38:18.759013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.743 [2024-11-20 06:38:18.759024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.743 [2024-11-20 06:38:18.759287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.743 [2024-11-20 06:38:18.759294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.743 [2024-11-20 06:38:18.759298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:27:58.743 [2024-11-20 06:38:18.759303] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=4096, cccid=0 00:27:58.743 [2024-11-20 06:38:18.759308] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113100) on tqpair(0x20b1690): expected_datao=0, payload_size=4096 00:27:58.743 [2024-11-20 06:38:18.759312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.759320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.759324] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.759463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.743 [2024-11-20 06:38:18.759470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.743 [2024-11-20 06:38:18.759473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.743 [2024-11-20 06:38:18.759477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.743 [2024-11-20 06:38:18.759485] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:58.743 [2024-11-20 06:38:18.759490] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:58.744 [2024-11-20 06:38:18.759495] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:58.744 [2024-11-20 06:38:18.759506] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:27:58.744 [2024-11-20 06:38:18.759511] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:58.744 [2024-11-20 06:38:18.759516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.759527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.759534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.759549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.744 [2024-11-20 06:38:18.759560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.744 [2024-11-20 06:38:18.759734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.744 [2024-11-20 06:38:18.759742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.744 [2024-11-20 06:38:18.759746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.744 [2024-11-20 06:38:18.759758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759761] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.759771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.744 [2024-11-20 06:38:18.759792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.759805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.744 [2024-11-20 06:38:18.759811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.759825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.744 [2024-11-20 06:38:18.759832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.759845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.744 [2024-11-20 06:38:18.759850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.759858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.759865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.759869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.759875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.744 [2024-11-20 06:38:18.759888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113100, cid 0, qid 0 00:27:58.744 [2024-11-20 06:38:18.759893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113280, cid 1, qid 0 00:27:58.744 [2024-11-20 06:38:18.759898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113400, cid 2, qid 0 00:27:58.744 [2024-11-20 06:38:18.759903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113580, cid 3, qid 0 00:27:58.744 [2024-11-20 06:38:18.759908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113700, cid 4, qid 0 00:27:58.744 [2024-11-20 06:38:18.760135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.744 [2024-11-20 
06:38:18.760142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.744 [2024-11-20 06:38:18.760145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113700) on tqpair=0x20b1690 00:27:58.744 [2024-11-20 06:38:18.760170] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:58.744 [2024-11-20 06:38:18.760175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.760185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.760193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.760199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.760216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.744 [2024-11-20 06:38:18.760227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113700, cid 4, qid 0 00:27:58.744 [2024-11-20 06:38:18.760424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.744 [2024-11-20 06:38:18.760431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.744 [2024-11-20 06:38:18.760434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113700) on tqpair=0x20b1690 00:27:58.744 [2024-11-20 06:38:18.760506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.760516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.760524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.760534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.744 [2024-11-20 06:38:18.760545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113700, cid 4, qid 0 00:27:58.744 [2024-11-20 06:38:18.760764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.744 [2024-11-20 06:38:18.760773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.744 [2024-11-20 06:38:18.760779] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760784] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=4096, cccid=4 00:27:58.744 [2024-11-20 06:38:18.760791] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113700) on tqpair(0x20b1690): expected_datao=0, payload_size=4096 00:27:58.744 [2024-11-20 06:38:18.760799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760833] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.760838] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.761005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.744 [2024-11-20 06:38:18.761011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.744 [2024-11-20 06:38:18.761015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.761019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113700) on tqpair=0x20b1690 00:27:58.744 [2024-11-20 06:38:18.761030] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:58.744 [2024-11-20 06:38:18.761040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.761050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.761057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.761060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b1690) 00:27:58.744 [2024-11-20 06:38:18.761067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.744 [2024-11-20 06:38:18.761078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113700, cid 4, qid 0 00:27:58.744 [2024-11-20 06:38:18.765176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.744 [2024-11-20 06:38:18.765190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.744 [2024-11-20 06:38:18.765194] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.765197] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=4096, cccid=4 00:27:58.744 [2024-11-20 06:38:18.765202] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113700) on tqpair(0x20b1690): expected_datao=0, payload_size=4096 00:27:58.744 [2024-11-20 06:38:18.765206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.765213] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.765216] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.765222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.744 [2024-11-20 06:38:18.765228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.744 [2024-11-20 06:38:18.765231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.765235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2113700) on tqpair=0x20b1690 00:27:58.744 [2024-11-20 06:38:18.765250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.765261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:58.744 [2024-11-20 06:38:18.765269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.744 [2024-11-20 06:38:18.765273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.765279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.765293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113700, cid 4, qid 0 00:27:58.745 [2024-11-20 06:38:18.765492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.745 [2024-11-20 06:38:18.765498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.745 [2024-11-20 06:38:18.765502] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765506] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=4096, cccid=4 00:27:58.745 [2024-11-20 06:38:18.765511] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113700) on tqpair(0x20b1690): expected_datao=0, payload_size=4096 00:27:58.745 [2024-11-20 06:38:18.765515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765522] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765525] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.745 [2024-11-20 06:38:18.765668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.745 [2024-11-20 06:38:18.765671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113700) on tqpair=0x20b1690 00:27:58.745 [2024-11-20 06:38:18.765684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:58.745 [2024-11-20 06:38:18.765692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:58.745 [2024-11-20 06:38:18.765702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:27:58.745 [2024-11-20 06:38:18.765709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:58.745 [2024-11-20 06:38:18.765714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:58.745 [2024-11-20 06:38:18.765722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
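The `_nvme_ctrlr_set_state` entries above trace a polled initialization state machine on the host side: each step logs a `setting state to X (timeout 30000 ms)` / `wait for X` pair, submits one admin command, and advances only when that command's completion comes back. A minimal C sketch of the pattern, with a reduced state list and hypothetical names (an illustration of the pattern in the log, not SPDK's actual code):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical reduction of the init sequence visible in the log. */
enum ctrlr_state {
	STATE_SET_NUM_QUEUES,
	STATE_WAIT_NUM_QUEUES,
	STATE_IDENTIFY_ACTIVE_NS,
	STATE_WAIT_IDENTIFY_ACTIVE_NS,
	STATE_READY,
};

struct ctrlr {
	enum ctrlr_state state;
	uint64_t deadline_us;   /* the "(timeout 30000 ms)" in each log line */
	bool cmd_done;          /* flipped by the admin completion callback */
};

static void set_state(struct ctrlr *c, enum ctrlr_state s, uint64_t now_us)
{
	c->state = s;
	c->deadline_us = now_us + 30000ull * 1000;   /* 30000 ms */
}

/* Driven from the transport poll loop; one call advances at most one
 * step, which is why unrelated PDU handling interleaves between a
 * command's submission and its completion in the trace above. */
static int ctrlr_process_init(struct ctrlr *c, uint64_t now_us)
{
	if (now_us > c->deadline_us)
		return -1;                        /* state timed out */

	switch (c->state) {
	case STATE_SET_NUM_QUEUES:
		/* submit SET FEATURES NUMBER OF QUEUES (cdw10=0x7) here */
		c->cmd_done = false;
		set_state(c, STATE_WAIT_NUM_QUEUES, now_us);
		break;
	case STATE_WAIT_NUM_QUEUES:
		if (c->cmd_done)
			set_state(c, STATE_IDENTIFY_ACTIVE_NS, now_us);
		break;
	case STATE_IDENTIFY_ACTIVE_NS:
		/* submit IDENTIFY with CNS 02h (active namespace list),
		 * matching "IDENTIFY (06) ... cdw10:00000002" above */
		c->cmd_done = false;
		set_state(c, STATE_WAIT_IDENTIFY_ACTIVE_NS, now_us);
		break;
	case STATE_WAIT_IDENTIFY_ACTIVE_NS:
		if (c->cmd_done)
			set_state(c, STATE_READY, now_us);
		break;
	case STATE_READY:
		break;
	}
	return 0;
}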
00:27:58.745 [2024-11-20 06:38:18.765728] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:58.745 [2024-11-20 06:38:18.765733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:58.745 [2024-11-20 06:38:18.765738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:58.745 [2024-11-20 06:38:18.765755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.765765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.765773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.765780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.765786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.745 [2024-11-20 06:38:18.765800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113700, cid 4, qid 0 00:27:58.745 [2024-11-20 06:38:18.765805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113880, cid 5, qid 0 00:27:58.745 [2024-11-20 06:38:18.766023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.745 [2024-11-20 06:38:18.766030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.745 [2024-11-20 06:38:18.766034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113700) on tqpair=0x20b1690 00:27:58.745 [2024-11-20 06:38:18.766044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.745 [2024-11-20 06:38:18.766050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.745 [2024-11-20 06:38:18.766054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113880) on tqpair=0x20b1690 00:27:58.745 [2024-11-20 06:38:18.766067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.766077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.766088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113880, cid 5, qid 0 00:27:58.745 [2024-11-20 06:38:18.766292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.745 [2024-11-20 06:38:18.766300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.745 [2024-11-20 06:38:18.766303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.745 
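Every inbound PDU in this trace runs the same pipeline: `nvme_tcp_pdu_ch_handle` reads the common header (where `pdu type = 5` or `pdu type = 7` is printed), `nvme_tcp_pdu_psh_handle` dispatches on that type to a PDU-specific header handler, and data-bearing PDUs then pass through `nvme_tcp_pdu_payload_handle`. Type 5 is a CapsuleResp (a completion, matched back to its `tcp_req` by command ID) and type 7 is C2HData (controller-to-host read data described by `datao`/`datal`/`cccid`); those type values come from the NVMe/TCP specification. A hedged sketch of the dispatch step, using simplified, hypothetical structures rather than SPDK's real ones:

#include <stdint.h>
#include <stdio.h>

/* NVMe/TCP PDU types seen in this log (byte 0 of the common header). */
#define PDU_TYPE_CAPSULE_RESP 0x05   /* "pdu type = 5" */
#define PDU_TYPE_C2H_DATA     0x07   /* "pdu type = 7" */

/* Cut-down view of the C2HData fields the log prints. */
struct c2h_data_hdr {
	uint16_t cccid;   /* "cccid=4": ties the data to a command   */
	uint32_t datao;   /* "datao=0": byte offset into the buffer  */
	uint32_t datal;   /* "datal=4096": bytes carried by this PDU */
};

static int pdu_psh_handle(uint8_t pdu_type, const struct c2h_data_hdr *h,
			  uint32_t payload_size)
{
	switch (pdu_type) {
	case PDU_TYPE_CAPSULE_RESP:
		/* Parse the completion, then finish the matching request,
		 * as in "complete tcp_req(0x2113700) on tqpair=0x20b1690". */
		return 0;
	case PDU_TYPE_C2H_DATA:
		/* The check behind "expected_datao=0, payload_size=4096":
		 * the data slice must fit the buffer the request posted. */
		if ((uint64_t)h->datao + h->datal > payload_size)
			return -1;
		return 0;
	default:
		fprintf(stderr, "unexpected pdu type = %u\n", pdu_type);
		return -1;
	}
}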
[2024-11-20 06:38:18.766307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113880) on tqpair=0x20b1690 00:27:58.745 [2024-11-20 06:38:18.766316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.766327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.766337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113880, cid 5, qid 0 00:27:58.745 [2024-11-20 06:38:18.766560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.745 [2024-11-20 06:38:18.766566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.745 [2024-11-20 06:38:18.766570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113880) on tqpair=0x20b1690 00:27:58.745 [2024-11-20 06:38:18.766583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.766594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.766604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113880, cid 5, qid 0 00:27:58.745 [2024-11-20 06:38:18.766776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.745 [2024-11-20 06:38:18.766782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.745 [2024-11-20 06:38:18.766785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113880) on tqpair=0x20b1690 00:27:58.745 [2024-11-20 06:38:18.766805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.766816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.766824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.766834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.766841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.766851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.766859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.766863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20b1690) 00:27:58.745 [2024-11-20 06:38:18.766869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.745 [2024-11-20 06:38:18.766880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113880, cid 5, qid 0 00:27:58.745 [2024-11-20 06:38:18.766885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113700, cid 4, qid 0 00:27:58.745 [2024-11-20 06:38:18.766890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113a00, cid 6, qid 0 00:27:58.745 [2024-11-20 06:38:18.766895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113b80, cid 7, qid 0 00:27:58.745 [2024-11-20 06:38:18.767213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.745 [2024-11-20 06:38:18.767220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.745 [2024-11-20 06:38:18.767224] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767227] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=8192, cccid=5 00:27:58.745 [2024-11-20 06:38:18.767232] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113880) on tqpair(0x20b1690): expected_datao=0, payload_size=8192 00:27:58.745 [2024-11-20 06:38:18.767241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767316] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767320] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.745 [2024-11-20 06:38:18.767332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.745 [2024-11-20 06:38:18.767335] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767339] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=512, cccid=4 00:27:58.745 [2024-11-20 06:38:18.767343] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113700) on tqpair(0x20b1690): expected_datao=0, payload_size=512 00:27:58.745 [2024-11-20 06:38:18.767348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767354] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.745 [2024-11-20 06:38:18.767363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.745 [2024-11-20 06:38:18.767369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.745 [2024-11-20 06:38:18.767373] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767376] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=512, cccid=6 00:27:58.746 [2024-11-20 06:38:18.767380] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113a00) on 
tqpair(0x20b1690): expected_datao=0, payload_size=512 00:27:58.746 [2024-11-20 06:38:18.767385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767391] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767395] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.746 [2024-11-20 06:38:18.767406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.746 [2024-11-20 06:38:18.767410] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767413] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20b1690): datao=0, datal=4096, cccid=7 00:27:58.746 [2024-11-20 06:38:18.767417] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113b80) on tqpair(0x20b1690): expected_datao=0, payload_size=4096 00:27:58.746 [2024-11-20 06:38:18.767422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767433] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767437] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.746 [2024-11-20 06:38:18.767453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.746 [2024-11-20 06:38:18.767457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113880) on tqpair=0x20b1690 00:27:58.746 [2024-11-20 06:38:18.767476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.746 [2024-11-20 06:38:18.767483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.746 [2024-11-20 06:38:18.767486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113700) on tqpair=0x20b1690 00:27:58.746 [2024-11-20 06:38:18.767501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.746 [2024-11-20 06:38:18.767507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.746 [2024-11-20 06:38:18.767511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113a00) on tqpair=0x20b1690 00:27:58.746 [2024-11-20 06:38:18.767524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.746 [2024-11-20 06:38:18.767530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.746 [2024-11-20 06:38:18.767533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.746 [2024-11-20 06:38:18.767537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113b80) on tqpair=0x20b1690 00:27:58.746 ===================================================== 00:27:58.746 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.746 ===================================================== 00:27:58.746 Controller Capabilities/Features 00:27:58.746 ================================ 00:27:58.746 Vendor ID: 8086 00:27:58.746 Subsystem Vendor ID: 8086 
00:27:58.746 Serial Number: SPDK00000000000001 00:27:58.746 Model Number: SPDK bdev Controller 00:27:58.746 Firmware Version: 25.01 00:27:58.746 Recommended Arb Burst: 6 00:27:58.746 IEEE OUI Identifier: e4 d2 5c 00:27:58.746 Multi-path I/O 00:27:58.746 May have multiple subsystem ports: Yes 00:27:58.746 May have multiple controllers: Yes 00:27:58.746 Associated with SR-IOV VF: No 00:27:58.746 Max Data Transfer Size: 131072 00:27:58.746 Max Number of Namespaces: 32 00:27:58.746 Max Number of I/O Queues: 127 00:27:58.746 NVMe Specification Version (VS): 1.3 00:27:58.746 NVMe Specification Version (Identify): 1.3 00:27:58.746 Maximum Queue Entries: 128 00:27:58.746 Contiguous Queues Required: Yes 00:27:58.746 Arbitration Mechanisms Supported 00:27:58.746 Weighted Round Robin: Not Supported 00:27:58.746 Vendor Specific: Not Supported 00:27:58.746 Reset Timeout: 15000 ms 00:27:58.746 Doorbell Stride: 4 bytes 00:27:58.746 NVM Subsystem Reset: Not Supported 00:27:58.746 Command Sets Supported 00:27:58.746 NVM Command Set: Supported 00:27:58.746 Boot Partition: Not Supported 00:27:58.746 Memory Page Size Minimum: 4096 bytes 00:27:58.746 Memory Page Size Maximum: 4096 bytes 00:27:58.746 Persistent Memory Region: Not Supported 00:27:58.746 Optional Asynchronous Events Supported 00:27:58.746 Namespace Attribute Notices: Supported 00:27:58.746 Firmware Activation Notices: Not Supported 00:27:58.746 ANA Change Notices: Not Supported 00:27:58.746 PLE Aggregate Log Change Notices: Not Supported 00:27:58.746 LBA Status Info Alert Notices: Not Supported 00:27:58.746 EGE Aggregate Log Change Notices: Not Supported 00:27:58.746 Normal NVM Subsystem Shutdown event: Not Supported 00:27:58.746 Zone Descriptor Change Notices: Not Supported 00:27:58.746 Discovery Log Change Notices: Not Supported 00:27:58.746 Controller Attributes 00:27:58.746 128-bit Host Identifier: Supported 00:27:58.746 Non-Operational Permissive Mode: Not Supported 00:27:58.746 NVM Sets: Not Supported 00:27:58.746 Read Recovery Levels: Not Supported 00:27:58.746 Endurance Groups: Not Supported 00:27:58.746 Predictable Latency Mode: Not Supported 00:27:58.746 Traffic Based Keep Alive: Not Supported 00:27:58.746 Namespace Granularity: Not Supported 00:27:58.746 SQ Associations: Not Supported 00:27:58.746 UUID List: Not Supported 00:27:58.746 Multi-Domain Subsystem: Not Supported 00:27:58.746 Fixed Capacity Management: Not Supported 00:27:58.746 Variable Capacity Management: Not Supported 00:27:58.746 Delete Endurance Group: Not Supported 00:27:58.746 Delete NVM Set: Not Supported 00:27:58.746 Extended LBA Formats Supported: Not Supported 00:27:58.746 Flexible Data Placement Supported: Not Supported 00:27:58.746 00:27:58.746 Controller Memory Buffer Support 00:27:58.746 ================================ 00:27:58.746 Supported: No 00:27:58.746 00:27:58.746 Persistent Memory Region Support 00:27:58.746 ================================ 00:27:58.746 Supported: No 00:27:58.746 00:27:58.746 Admin Command Set Attributes 00:27:58.746 ============================ 00:27:58.746 Security Send/Receive: Not Supported 00:27:58.746 Format NVM: Not Supported 00:27:58.746 Firmware Activate/Download: Not Supported 00:27:58.746 Namespace Management: Not Supported 00:27:58.746 Device Self-Test: Not Supported 00:27:58.746 Directives: Not Supported 00:27:58.746 NVMe-MI: Not Supported 00:27:58.746 Virtualization Management: Not Supported 00:27:58.746 Doorbell Buffer Config: Not Supported 00:27:58.746 Get LBA Status Capability: Not Supported 00:27:58.746 Command &
Feature Lockdown Capability: Not Supported 00:27:58.746 Abort Command Limit: 4 00:27:58.746 Async Event Request Limit: 4 00:27:58.746 Number of Firmware Slots: N/A 00:27:58.746 Firmware Slot 1 Read-Only: N/A 00:27:58.746 Firmware Activation Without Reset: N/A 00:27:58.746 Multiple Update Detection Support: N/A 00:27:58.746 Firmware Update Granularity: No Information Provided 00:27:58.746 Per-Namespace SMART Log: No 00:27:58.746 Asymmetric Namespace Access Log Page: Not Supported 00:27:58.746 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:58.746 Command Effects Log Page: Supported 00:27:58.746 Get Log Page Extended Data: Supported 00:27:58.746 Telemetry Log Pages: Not Supported 00:27:58.746 Persistent Event Log Pages: Not Supported 00:27:58.746 Supported Log Pages Log Page: May Support 00:27:58.746 Commands Supported & Effects Log Page: Not Supported 00:27:58.746 Feature Identifiers & Effects Log Page: May Support 00:27:58.746 NVMe-MI Commands & Effects Log Page: May Support 00:27:58.746 Data Area 4 for Telemetry Log: Not Supported 00:27:58.746 Error Log Page Entries Supported: 128 00:27:58.746 Keep Alive: Supported 00:27:58.746 Keep Alive Granularity: 10000 ms 00:27:58.746 00:27:58.746 NVM Command Set Attributes 00:27:58.746 ========================== 00:27:58.746 Submission Queue Entry Size 00:27:58.746 Max: 64 00:27:58.746 Min: 64 00:27:58.746 Completion Queue Entry Size 00:27:58.746 Max: 16 00:27:58.746 Min: 16 00:27:58.746 Number of Namespaces: 32 00:27:58.746 Compare Command: Supported 00:27:58.746 Write Uncorrectable Command: Not Supported 00:27:58.746 Dataset Management Command: Supported 00:27:58.746 Write Zeroes Command: Supported 00:27:58.746 Set Features Save Field: Not Supported 00:27:58.746 Reservations: Supported 00:27:58.746 Timestamp: Not Supported 00:27:58.746 Copy: Supported 00:27:58.746 Volatile Write Cache: Present 00:27:58.746 Atomic Write Unit (Normal): 1 00:27:58.746 Atomic Write Unit (PFail): 1 00:27:58.746 Atomic Compare & Write Unit: 1 00:27:58.746 Fused Compare & Write: Supported 00:27:58.746 Scatter-Gather List 00:27:58.746 SGL Command Set: Supported 00:27:58.746 SGL Keyed: Supported 00:27:58.746 SGL Bit Bucket Descriptor: Not Supported 00:27:58.746 SGL Metadata Pointer: Not Supported 00:27:58.746 Oversized SGL: Not Supported 00:27:58.746 SGL Metadata Address: Not Supported 00:27:58.746 SGL Offset: Supported 00:27:58.746 Transport SGL Data Block: Not Supported 00:27:58.746 Replay Protected Memory Block: Not Supported 00:27:58.746 00:27:58.746 Firmware Slot Information 00:27:58.746 ========================= 00:27:58.746 Active slot: 1 00:27:58.746 Slot 1 Firmware Revision: 25.01 00:27:58.747 00:27:58.747 00:27:58.747 Commands Supported and Effects 00:27:58.747 ============================== 00:27:58.747 Admin Commands 00:27:58.747 -------------- 00:27:58.747 Get Log Page (02h): Supported 00:27:58.747 Identify (06h): Supported 00:27:58.747 Abort (08h): Supported 00:27:58.747 Set Features (09h): Supported 00:27:58.747 Get Features (0Ah): Supported 00:27:58.747 Asynchronous Event Request (0Ch): Supported 00:27:58.747 Keep Alive (18h): Supported 00:27:58.747 I/O Commands 00:27:58.747 ------------ 00:27:58.747 Flush (00h): Supported LBA-Change 00:27:58.747 Write (01h): Supported LBA-Change 00:27:58.747 Read (02h): Supported 00:27:58.747 Compare (05h): Supported 00:27:58.747 Write Zeroes (08h): Supported LBA-Change 00:27:58.747 Dataset Management (09h): Supported LBA-Change 00:27:58.747 Copy (19h): Supported LBA-Change 00:27:58.747 00:27:58.747 Error Log 00:27:58.747
========= 00:27:58.747 00:27:58.747 Arbitration 00:27:58.747 =========== 00:27:58.747 Arbitration Burst: 1 00:27:58.747 00:27:58.747 Power Management 00:27:58.747 ================ 00:27:58.747 Number of Power States: 1 00:27:58.747 Current Power State: Power State #0 00:27:58.747 Power State #0: 00:27:58.747 Max Power: 0.00 W 00:27:58.747 Non-Operational State: Operational 00:27:58.747 Entry Latency: Not Reported 00:27:58.747 Exit Latency: Not Reported 00:27:58.747 Relative Read Throughput: 0 00:27:58.747 Relative Read Latency: 0 00:27:58.747 Relative Write Throughput: 0 00:27:58.747 Relative Write Latency: 0 00:27:58.747 Idle Power: Not Reported 00:27:58.747 Active Power: Not Reported 00:27:58.747 Non-Operational Permissive Mode: Not Supported 00:27:58.747 00:27:58.747 Health Information 00:27:58.747 ================== 00:27:58.747 Critical Warnings: 00:27:58.747 Available Spare Space: OK 00:27:58.747 Temperature: OK 00:27:58.747 Device Reliability: OK 00:27:58.747 Read Only: No 00:27:58.747 Volatile Memory Backup: OK 00:27:58.747 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:58.747 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:58.747 Available Spare: 0% 00:27:58.747 Available Spare Threshold: 0% 00:27:58.747 Life Percentage Used: 0% 00:27:58.748 [2024-11-20 06:38:18.767644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.767649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20b1690) 00:27:58.747 [2024-11-20 06:38:18.767656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.747 [2024-11-20 06:38:18.767669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113b80, cid 7, qid 0 00:27:58.747 [2024-11-20 06:38:18.767876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.747 [2024-11-20 06:38:18.767883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.747 [2024-11-20 06:38:18.767887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.767891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113b80) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.767925] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:58.747 [2024-11-20 06:38:18.767936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113100) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.767942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.747 [2024-11-20 06:38:18.767948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113280) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.767952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.747 [2024-11-20 06:38:18.767958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113400) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.767962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.747 [2024-11-20 06:38:18.767967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113580) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.767972] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.747 [2024-11-20 06:38:18.767980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.767984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.767988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b1690) 00:27:58.747 [2024-11-20 06:38:18.767995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.747 [2024-11-20 06:38:18.768008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113580, cid 3, qid 0 00:27:58.747 [2024-11-20 06:38:18.768248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.747 [2024-11-20 06:38:18.768255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.747 [2024-11-20 06:38:18.768258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113580) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.768269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b1690) 00:27:58.747 [2024-11-20 06:38:18.768284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.747 [2024-11-20 06:38:18.768300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113580, cid 3, qid 0 00:27:58.747 [2024-11-20 06:38:18.768479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.747 [2024-11-20 06:38:18.768485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.747 [2024-11-20 06:38:18.768489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113580) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.768498] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:58.747 [2024-11-20 06:38:18.768502] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:58.747 [2024-11-20 06:38:18.768512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b1690) 00:27:58.747 [2024-11-20 06:38:18.768526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.747 [2024-11-20 06:38:18.768537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113580, cid 3, qid 0 00:27:58.747 [2024-11-20 06:38:18.768705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.747 [2024-11-20 06:38:18.768712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.747 [2024-11-20 
06:38:18.768715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113580) on tqpair=0x20b1690 00:27:58.747 [2024-11-20 06:38:18.768729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.747 [2024-11-20 06:38:18.768737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b1690) 00:27:58.747 [2024-11-20 06:38:18.768743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.747 [2024-11-20 06:38:18.768754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113580, cid 3, qid 0 00:27:58.747 [2024-11-20 06:38:18.768960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.747 [2024-11-20 06:38:18.768966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.747 [2024-11-20 06:38:18.768970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.748 [2024-11-20 06:38:18.768973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113580) on tqpair=0x20b1690 00:27:58.748 [2024-11-20 06:38:18.768984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.748 [2024-11-20 06:38:18.768988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.748 [2024-11-20 06:38:18.768991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b1690) 00:27:58.748 [2024-11-20 06:38:18.768998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.748 [2024-11-20 06:38:18.769008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113580, cid 3, qid 0 00:27:58.748 [2024-11-20 06:38:18.773175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.748 [2024-11-20 06:38:18.773187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.748 [2024-11-20 06:38:18.773191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.748 [2024-11-20 06:38:18.773195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113580) on tqpair=0x20b1690 00:27:58.748 [2024-11-20 06:38:18.773205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.748 [2024-11-20 06:38:18.773209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.748 [2024-11-20 06:38:18.773213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20b1690) 00:27:58.748 [2024-11-20 06:38:18.773226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.748 [2024-11-20 06:38:18.773239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113580, cid 3, qid 0 00:27:58.748 [2024-11-20 06:38:18.773429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.748 [2024-11-20 06:38:18.773435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.748 [2024-11-20 06:38:18.773439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.748 [2024-11-20 06:38:18.773443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113580) on 
tqpair=0x20b1690 00:27:58.748 [2024-11-20 06:38:18.773451] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:27:58.748 Data Units Read: 0 00:27:58.748 Data Units Written: 0 00:27:58.748 Host Read Commands: 0 00:27:58.748 Host Write Commands: 0 00:27:58.748 Controller Busy Time: 0 minutes 00:27:58.748 Power Cycles: 0 00:27:58.748 Power On Hours: 0 hours 00:27:58.748 Unsafe Shutdowns: 0 00:27:58.748 Unrecoverable Media Errors: 0 00:27:58.748 Lifetime Error Log Entries: 0 00:27:58.748 Warning Temperature Time: 0 minutes 00:27:58.748 Critical Temperature Time: 0 minutes 00:27:58.748 00:27:58.748 Number of Queues 00:27:58.748 ================ 00:27:58.748 Number of I/O Submission Queues: 127 00:27:58.748 Number of I/O Completion Queues: 127 00:27:58.748 00:27:58.748 Active Namespaces 00:27:58.748 ================= 00:27:58.748 Namespace ID:1 00:27:58.748 Error Recovery Timeout: Unlimited 00:27:58.748 Command Set Identifier: NVM (00h) 00:27:58.748 Deallocate: Supported 00:27:58.748 Deallocated/Unwritten Error: Not Supported 00:27:58.748 Deallocated Read Value: Unknown 00:27:58.748 Deallocate in Write Zeroes: Not Supported 00:27:58.748 Deallocated Guard Field: 0xFFFF 00:27:58.748 Flush: Supported 00:27:58.748 Reservation: Supported 00:27:58.748 Namespace Sharing Capabilities: Multiple Controllers 00:27:58.748 Size (in LBAs): 131072 (0GiB) 00:27:58.748 Capacity (in LBAs): 131072 (0GiB) 00:27:58.748 Utilization (in LBAs): 131072 (0GiB) 00:27:58.748 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:58.748 EUI64: ABCDEF0123456789 00:27:58.748 UUID: 6b5c10f1-92ea-450e-82db-c03c92669e52 00:27:58.748 Thin Provisioning: Not Supported 00:27:58.748 Per-NS Atomic Units: Yes 00:27:58.748 Atomic Boundary Size (Normal): 0 00:27:58.748 Atomic Boundary Size (PFail): 0 00:27:58.748 Atomic Boundary Offset: 0 00:27:58.748 Maximum Single Source Range Length: 65535 00:27:58.748 Maximum Copy Length: 65535 00:27:58.748 Maximum Source Range Count: 1 00:27:58.748 NGUID/EUI64 Never Reused: No 00:27:58.748 Namespace Write Protected: No 00:27:58.748 Number of LBA Formats: 1 00:27:58.748 Current LBA Format: LBA Format #00 00:27:58.748 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:58.748 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:58.748 06:38:18
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:58.748 rmmod nvme_tcp 00:27:58.748 rmmod nvme_fabrics 00:27:58.748 rmmod nvme_keyring 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2932280 ']' 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2932280 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2932280 ']' 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2932280 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2932280 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2932280' 00:27:58.748 killing process with pid 2932280 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2932280 00:27:58.748 06:38:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2932280 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:59.009 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.010 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.010 06:38:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:01.559 00:28:01.559 real 0m11.623s 00:28:01.559 user 0m8.461s 00:28:01.559 sys 0m6.155s 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:01.559 
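The `killprocess` helper traced above is careful about teardown: it probes the target with `kill -0`, checks the command name with `ps --no-headers -o comm=` (refusing to kill anything that turns out to be `sudo` itself), then kills the process and `wait`s for it to be reaped. The same liveness-probe-then-terminate sequence in C, as an illustrative sketch rather than what the test scripts actually run:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

static int killprocess(pid_t pid)
{
	/* Signal 0 delivers nothing; it only checks that the pid exists,
	 * which is exactly what the shell's `kill -0 "$pid"` does. */
	if (kill(pid, 0) != 0) {
		if (errno == ESRCH)
			fprintf(stderr, "pid %d is not running\n", (int)pid);
		return -1;
	}
	printf("killing process with pid %d\n", (int)pid);
	if (kill(pid, SIGTERM) != 0)
		return -1;
	/* Reap the process. waitpid() only works for our own children,
	 * which is why the shell helper can simply call `wait`. */
	if (waitpid(pid, NULL, 0) < 0 && errno != ECHILD)
		return -1;
	return 0;
}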
************************************ 00:28:01.559 END TEST nvmf_identify 00:28:01.559 ************************************ 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.559 ************************************ 00:28:01.559 START TEST nvmf_perf 00:28:01.559 ************************************ 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:01.559 * Looking for test storage... 00:28:01.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:01.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.559 --rc genhtml_branch_coverage=1 00:28:01.559 --rc genhtml_function_coverage=1 00:28:01.559 --rc genhtml_legend=1 00:28:01.559 --rc geninfo_all_blocks=1 00:28:01.559 --rc geninfo_unexecuted_blocks=1 00:28:01.559 00:28:01.559 ' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:01.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.559 --rc genhtml_branch_coverage=1 00:28:01.559 --rc genhtml_function_coverage=1 00:28:01.559 --rc genhtml_legend=1 00:28:01.559 --rc geninfo_all_blocks=1 00:28:01.559 --rc geninfo_unexecuted_blocks=1 00:28:01.559 00:28:01.559 ' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:01.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.559 --rc genhtml_branch_coverage=1 00:28:01.559 --rc genhtml_function_coverage=1 00:28:01.559 --rc genhtml_legend=1 00:28:01.559 --rc geninfo_all_blocks=1 00:28:01.559 --rc geninfo_unexecuted_blocks=1 00:28:01.559 00:28:01.559 ' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:01.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.559 --rc genhtml_branch_coverage=1 00:28:01.559 --rc genhtml_function_coverage=1 00:28:01.559 --rc genhtml_legend=1 00:28:01.559 --rc geninfo_all_blocks=1 00:28:01.559 --rc geninfo_unexecuted_blocks=1 00:28:01.559 00:28:01.559 ' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.559 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:01.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.560 06:38:21 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:01.560 06:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:09.703 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:09.703 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.703 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:09.704 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.704 06:38:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:09.704 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:09.704 06:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.704 06:38:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:28:09.704 00:28:09.704 --- 10.0.0.2 ping statistics --- 00:28:09.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.704 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:28:09.704 00:28:09.704 --- 10.0.0.1 ping statistics --- 00:28:09.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.704 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2936800 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2936800 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2936800 ']' 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:09.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:09.704 06:38:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.704 [2024-11-20 06:38:29.198887] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:28:09.704 [2024-11-20 06:38:29.198951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.704 [2024-11-20 06:38:29.300086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.704 [2024-11-20 06:38:29.353549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.704 [2024-11-20 06:38:29.353607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.704 [2024-11-20 06:38:29.353616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.704 [2024-11-20 06:38:29.353624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.704 [2024-11-20 06:38:29.353635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.704 [2024-11-20 06:38:29.355722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.704 [2024-11-20 06:38:29.355880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.704 [2024-11-20 06:38:29.356050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.704 [2024-11-20 06:38:29.356048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:09.966 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:10.539 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:10.539 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:10.539 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:10.539 06:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:10.801 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
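The namespace plumbing performed by nvmf/common.sh above reduces to a minimal sketch like the following (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the namespace name are the ones from this run; they vary per host and NIC driver):

    ip netns add cvl_0_0_ns_spdk                  # target side gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the first e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag is what lets teardown strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                  # verify initiator -> target path
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and target -> initiator

nvmf_tgt itself is then launched under ip netns exec inside that namespace; its RPC socket /var/tmp/spdk.sock lives on the filesystem rather than in the network stack, which is why rpc.py can keep talking to it from the host namespace in the calls above and below.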
00:28:10.801 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:10.801 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:10.801 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:10.801 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:11.063 [2024-11-20 06:38:31.187834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.063 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.326 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:11.326 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:11.587 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:11.587 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:11.587 06:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.846 [2024-11-20 06:38:31.983169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.846 06:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:12.106 06:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:12.106 06:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:12.106 06:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:12.106 06:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:13.488 Initializing NVMe Controllers 00:28:13.488 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:13.488 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:13.488 Initialization complete. Launching workers. 
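Condensed, the RPC bring-up that host/perf.sh just drove against that socket is the following (NQN, serial, bdev names, and addresses as used in this run):

    scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config     # attach the local NVMe -> Nvme0n1
    scripts/rpc.py bdev_malloc_create 64 512                       # 64 MB, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Because the local controller's traddr (0000:65:00.0, read back via framework_get_config) is known, perf.sh@53 first runs spdk_nvme_perf directly against that PCIe device as a baseline; its latency table follows.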
00:28:13.488 ======================================================== 00:28:13.488 Latency(us) 00:28:13.488 Device Information : IOPS MiB/s Average min max 00:28:13.488 PCIE (0000:65:00.0) NSID 1 from core 0: 78531.15 306.76 406.83 13.36 5465.58 00:28:13.488 ======================================================== 00:28:13.488 Total : 78531.15 306.76 406.83 13.36 5465.58 00:28:13.488 00:28:13.488 06:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:14.871 Initializing NVMe Controllers 00:28:14.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:14.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:14.871 Initialization complete. Launching workers. 00:28:14.871 ======================================================== 00:28:14.871 Latency(us) 00:28:14.871 Device Information : IOPS MiB/s Average min max 00:28:14.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 108.00 0.42 9414.34 108.83 45729.76 00:28:14.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17957.14 6992.84 47895.00 00:28:14.871 ======================================================== 00:28:14.871 Total : 164.00 0.64 12331.39 108.83 47895.00 00:28:14.871 00:28:14.871 06:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:15.811 Initializing NVMe Controllers 00:28:15.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:15.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:15.811 Initialization complete. Launching workers. 00:28:15.811 ======================================================== 00:28:15.811 Latency(us) 00:28:15.811 Device Information : IOPS MiB/s Average min max 00:28:15.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11887.00 46.43 2694.41 502.26 6295.64 00:28:15.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3886.00 15.18 8279.39 5329.06 17369.04 00:28:15.811 ======================================================== 00:28:15.811 Total : 15773.00 61.61 4070.39 502.26 17369.04 00:28:15.811 00:28:15.811 06:38:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:15.811 06:38:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:15.811 06:38:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:18.351 Initializing NVMe Controllers 00:28:18.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.351 Controller IO queue size 128, less than required. 00:28:18.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:18.351 Controller IO queue size 128, less than required. 00:28:18.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:18.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:18.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:18.351 Initialization complete. Launching workers. 00:28:18.351 ======================================================== 00:28:18.351 Latency(us) 00:28:18.351 Device Information : IOPS MiB/s Average min max 00:28:18.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1729.34 432.34 74746.62 41492.50 116917.06 00:28:18.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 617.76 154.44 222247.86 78456.77 331800.14 00:28:18.351 ======================================================== 00:28:18.351 Total : 2347.10 586.78 113569.35 41492.50 331800.14 00:28:18.351 00:28:18.351 06:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:18.611 No valid NVMe controllers or AIO or URING devices found 00:28:18.611 Initializing NVMe Controllers 00:28:18.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.611 Controller IO queue size 128, less than required. 00:28:18.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:18.611 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:18.611 Controller IO queue size 128, less than required. 00:28:18.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:18.611 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:18.611 WARNING: Some requested NVMe devices were skipped 00:28:18.611 06:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:21.152 Initializing NVMe Controllers 00:28:21.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.152 Controller IO queue size 128, less than required. 00:28:21.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:21.152 Controller IO queue size 128, less than required. 00:28:21.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:21.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:21.152 Initialization complete. Launching workers. 
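Before the per-queue transport statistics that follow, the skipped run at perf.sh@64 deserves a note: both namespaces were dropped purely because the requested IO size is not sector-aligned. With 512-byte sectors,

    echo $((36964 % 512))    # prints 100, i.e. 36964 is not a multiple of 512

so spdk_nvme_perf removes each namespace from the test, and with no namespaces left it reports "No valid NVMe controllers or AIO or URING devices found" instead of failing outright.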
00:28:21.152 00:28:21.152 ==================== 00:28:21.152 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:21.152 TCP transport: 00:28:21.152 polls: 38384 00:28:21.152 idle_polls: 22606 00:28:21.152 sock_completions: 15778 00:28:21.152 nvme_completions: 7385 00:28:21.152 submitted_requests: 11016 00:28:21.152 queued_requests: 1 00:28:21.152 00:28:21.152 ==================== 00:28:21.152 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:21.152 TCP transport: 00:28:21.152 polls: 37517 00:28:21.152 idle_polls: 22245 00:28:21.152 sock_completions: 15272 00:28:21.152 nvme_completions: 7459 00:28:21.152 submitted_requests: 11134 00:28:21.152 queued_requests: 1 00:28:21.152 ======================================================== 00:28:21.152 Latency(us) 00:28:21.152 Device Information : IOPS MiB/s Average min max 00:28:21.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1845.99 461.50 70649.33 29920.78 128068.85 00:28:21.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1864.49 466.12 69126.73 25147.77 123135.12 00:28:21.152 ======================================================== 00:28:21.152 Total : 3710.47 927.62 69884.23 25147.77 128068.85 00:28:21.152 00:28:21.152 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:21.152 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.414 rmmod nvme_tcp 00:28:21.414 rmmod nvme_fabrics 00:28:21.414 rmmod nvme_keyring 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2936800 ']' 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2936800 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2936800 ']' 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2936800 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2936800 00:28:21.414 06:38:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2936800' 00:28:21.414 killing process with pid 2936800 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 2936800 00:28:21.414 06:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2936800 00:28:23.325 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.325 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.325 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.325 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:23.325 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:28:23.325 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.586 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.586 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.586 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.586 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.586 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.586 06:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.500 00:28:25.500 real 0m24.367s 00:28:25.500 user 0m58.458s 00:28:25.500 sys 0m8.766s 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:25.500 ************************************ 00:28:25.500 END TEST nvmf_perf 00:28:25.500 ************************************ 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.500 ************************************ 00:28:25.500 START TEST nvmf_fio_host 00:28:25.500 ************************************ 00:28:25.500 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:25.763 * Looking for test storage... 
00:28:25.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.763 --rc genhtml_branch_coverage=1 00:28:25.763 --rc genhtml_function_coverage=1 00:28:25.763 --rc genhtml_legend=1 00:28:25.763 --rc geninfo_all_blocks=1 00:28:25.763 --rc geninfo_unexecuted_blocks=1 00:28:25.763 00:28:25.763 ' 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.763 --rc genhtml_branch_coverage=1 00:28:25.763 --rc genhtml_function_coverage=1 00:28:25.763 --rc genhtml_legend=1 00:28:25.763 --rc geninfo_all_blocks=1 00:28:25.763 --rc geninfo_unexecuted_blocks=1 00:28:25.763 00:28:25.763 ' 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.763 --rc genhtml_branch_coverage=1 00:28:25.763 --rc genhtml_function_coverage=1 00:28:25.763 --rc genhtml_legend=1 00:28:25.763 --rc geninfo_all_blocks=1 00:28:25.763 --rc geninfo_unexecuted_blocks=1 00:28:25.763 00:28:25.763 ' 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.763 --rc genhtml_branch_coverage=1 00:28:25.763 --rc genhtml_function_coverage=1 00:28:25.763 --rc genhtml_legend=1 00:28:25.763 --rc geninfo_all_blocks=1 00:28:25.763 --rc geninfo_unexecuted_blocks=1 00:28:25.763 00:28:25.763 ' 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.763 06:38:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.763 06:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.763 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:25.763 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:25.763 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:25.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:25.764 
06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.764 06:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.900 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.900 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:33.901 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:33.901 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:33.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:33.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:28:33.901 00:28:33.901 --- 10.0.0.2 ping statistics --- 00:28:33.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.901 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:28:33.901 00:28:33.901 --- 10.0.0.1 ping statistics --- 00:28:33.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.901 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.901 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2943859 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2943859 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2943859 ']' 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:33.902 06:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.902 [2024-11-20 06:38:53.573166] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
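For reference, the nvmf_tcp_init sequence traced above reduces to a small, reproducible topology: the target-side port (cvl_0_0 here) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, the NVMe/TCP port is opened in the firewall, and a ping in each direction confirms the link. A minimal standalone sketch using the interface names and addresses from this log (substitute your own NICs):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the harness tags this rule with '-m comment ... SPDK_NVMF:...' so that
  # nvmftestfini can strip it again via iptables-save | grep -v SPDK_NVMF
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

Running nvmf_tgt under ip netns exec cvl_0_0_ns_spdk (as the host/fio.sh@23 line above does) then gives the target exclusive use of the namespaced port while fio connects from the root namespace.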
00:28:33.902 [2024-11-20 06:38:53.573238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.902 [2024-11-20 06:38:53.675779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.902 [2024-11-20 06:38:53.728461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.902 [2024-11-20 06:38:53.728516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.902 [2024-11-20 06:38:53.728525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.902 [2024-11-20 06:38:53.728532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.902 [2024-11-20 06:38:53.728538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.902 [2024-11-20 06:38:53.730517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.902 [2024-11-20 06:38:53.730676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.902 [2024-11-20 06:38:53.730840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.902 [2024-11-20 06:38:53.730840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.163 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:34.163 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:28:34.163 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:34.424 [2024-11-20 06:38:54.569260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.424 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:34.424 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.424 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.424 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:34.686 Malloc1 00:28:34.686 06:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.947 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:35.208 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.208 [2024-11-20 06:38:55.434823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.208 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:35.469 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:35.470 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:35.470 06:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:36.056 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:36.056 fio-3.35 00:28:36.056 Starting 1 thread 00:28:38.596 00:28:38.596 test: (groupid=0, jobs=1): 
err= 0: pid=2944404: Wed Nov 20 06:38:58 2024 00:28:38.596 read: IOPS=13.9k, BW=54.2MiB/s (56.9MB/s)(109MiB/2004msec) 00:28:38.596 slat (usec): min=2, max=290, avg= 2.14, stdev= 2.43 00:28:38.596 clat (usec): min=3151, max=8955, avg=5066.04, stdev=356.05 00:28:38.596 lat (usec): min=3153, max=8957, avg=5068.18, stdev=356.14 00:28:38.596 clat percentiles (usec): 00:28:38.596 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:28:38.596 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:38.596 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:28:38.596 | 99.00th=[ 5866], 99.50th=[ 6128], 99.90th=[ 7308], 99.95th=[ 8029], 00:28:38.596 | 99.99th=[ 8848] 00:28:38.596 bw ( KiB/s): min=54176, max=56048, per=99.95%, avg=55514.00, stdev=897.32, samples=4 00:28:38.596 iops : min=13544, max=14012, avg=13878.50, stdev=224.33, samples=4 00:28:38.596 write: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(109MiB/2004msec); 0 zone resets 00:28:38.596 slat (usec): min=2, max=272, avg= 2.21, stdev= 1.79 00:28:38.596 clat (usec): min=2572, max=8418, avg=4092.23, stdev=305.53 00:28:38.596 lat (usec): min=2574, max=8420, avg=4094.44, stdev=305.67 00:28:38.596 clat percentiles (usec): 00:28:38.596 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:28:38.596 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:28:38.596 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:28:38.596 | 99.00th=[ 4752], 99.50th=[ 5211], 99.90th=[ 6063], 99.95th=[ 7701], 00:28:38.596 | 99.99th=[ 8356] 00:28:38.596 bw ( KiB/s): min=54576, max=56024, per=99.99%, avg=55564.00, stdev=665.32, samples=4 00:28:38.596 iops : min=13644, max=14006, avg=13891.00, stdev=166.33, samples=4 00:28:38.596 lat (msec) : 4=18.41%, 10=81.59% 00:28:38.596 cpu : usr=73.09%, sys=25.81%, ctx=30, majf=0, minf=17 00:28:38.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:38.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:38.596 issued rwts: total=27827,27839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:38.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:38.596 00:28:38.596 Run status group 0 (all jobs): 00:28:38.596 READ: bw=54.2MiB/s (56.9MB/s), 54.2MiB/s-54.2MiB/s (56.9MB/s-56.9MB/s), io=109MiB (114MB), run=2004-2004msec 00:28:38.596 WRITE: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=109MiB (114MB), run=2004-2004msec 00:28:38.596 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:38.596 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:28:38.597 
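Both fio jobs in this test (example_config.fio above, mock_sgl_config.fio being prepared here) reach the target through SPDK's fio plugin rather than the kernel initiator: fio_plugin LD_PRELOADs build/fio/spdk_nvme and passes the NVMe-oF connection parameters as the --filename string, while the job file selects ioengine=spdk. Stripped of the sanitizer-library probing traced around it, the invocation is just:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

The ldd | grep libasan / libclang_rt.asan steps only exist to prepend the matching ASan runtime to LD_PRELOAD when the build is sanitized; in this run both probes came back empty (asan_lib=).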
06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:38.597 06:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:38.597 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:38.597 fio-3.35 00:28:38.597 Starting 1 thread 00:28:41.135 00:28:41.135 test: (groupid=0, jobs=1): err= 0: pid=2945222: Wed Nov 20 06:39:01 2024 00:28:41.135 read: IOPS=9440, BW=148MiB/s (155MB/s)(302MiB/2045msec) 00:28:41.135 slat (usec): min=3, max=110, avg= 3.59, stdev= 1.59 00:28:41.135 clat (usec): min=1314, max=51819, avg=8200.95, stdev=3157.66 00:28:41.135 lat (usec): min=1318, max=51823, avg=8204.55, stdev=3157.74 00:28:41.135 clat percentiles (usec): 00:28:41.135 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6259], 00:28:41.135 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8455], 00:28:41.135 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[10814], 95.00th=[11469], 00:28:41.135 | 99.00th=[13173], 99.50th=[14091], 99.90th=[50594], 99.95th=[51119], 00:28:41.135 | 99.99th=[51643] 00:28:41.135 bw ( KiB/s): min=66944, max=91200, per=50.55%, avg=76352.00, stdev=10435.96, samples=4 00:28:41.135 iops : min= 4184, max= 5700, avg=4772.00, stdev=652.25, samples=4 00:28:41.135 write: IOPS=5557, BW=86.8MiB/s (91.1MB/s)(156MiB/1794msec); 0 zone resets 00:28:41.135 slat (usec): min=39, max=457, 
avg=40.98, stdev= 8.31 00:28:41.135 clat (usec): min=1899, max=55856, avg=9225.17, stdev=3281.72 00:28:41.135 lat (usec): min=1939, max=55896, avg=9266.15, stdev=3282.51 00:28:41.135 clat percentiles (usec): 00:28:41.135 | 1.00th=[ 6390], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7832], 00:28:41.135 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:28:41.135 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:28:41.135 | 99.00th=[13435], 99.50th=[44303], 99.90th=[54264], 99.95th=[55313], 00:28:41.135 | 99.99th=[55837] 00:28:41.135 bw ( KiB/s): min=70560, max=94208, per=89.21%, avg=79336.00, stdev=10415.18, samples=4 00:28:41.135 iops : min= 4410, max= 5888, avg=4958.50, stdev=650.95, samples=4 00:28:41.135 lat (msec) : 2=0.05%, 4=0.46%, 10=78.18%, 20=20.88%, 50=0.25% 00:28:41.135 lat (msec) : 100=0.18% 00:28:41.135 cpu : usr=86.15%, sys=12.72%, ctx=18, majf=0, minf=31 00:28:41.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:41.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:41.135 issued rwts: total=19306,9971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.135 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.135 00:28:41.135 Run status group 0 (all jobs): 00:28:41.135 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=302MiB (316MB), run=2045-2045msec 00:28:41.135 WRITE: bw=86.8MiB/s (91.1MB/s), 86.8MiB/s-86.8MiB/s (91.1MB/s-91.1MB/s), io=156MiB (163MB), run=1794-1794msec 00:28:41.135 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.396 rmmod nvme_tcp 00:28:41.396 rmmod nvme_fabrics 00:28:41.396 rmmod nvme_keyring 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2943859 ']' 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2943859 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2943859 ']' 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@956 -- # kill -0 2943859 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2943859 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2943859' 00:28:41.396 killing process with pid 2943859 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2943859 00:28:41.396 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2943859 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.657 06:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.569 06:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.569 00:28:43.569 real 0m17.992s 00:28:43.569 user 1m7.601s 00:28:43.569 sys 0m7.668s 00:28:43.569 06:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:43.569 06:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.569 ************************************ 00:28:43.569 END TEST nvmf_fio_host 00:28:43.569 ************************************ 00:28:43.569 06:39:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:43.569 06:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:43.569 06:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:43.569 06:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.830 ************************************ 00:28:43.830 START TEST nvmf_failover 00:28:43.830 ************************************ 00:28:43.830 06:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:43.830 * Looking for test storage... 00:28:43.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:43.830 06:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:43.830 06:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:28:43.830 06:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:43.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.830 --rc genhtml_branch_coverage=1 00:28:43.830 --rc genhtml_function_coverage=1 00:28:43.830 --rc genhtml_legend=1 00:28:43.830 --rc geninfo_all_blocks=1 00:28:43.830 --rc geninfo_unexecuted_blocks=1 00:28:43.830 00:28:43.830 ' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:43.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.830 --rc genhtml_branch_coverage=1 00:28:43.830 --rc genhtml_function_coverage=1 00:28:43.830 --rc genhtml_legend=1 00:28:43.830 --rc geninfo_all_blocks=1 00:28:43.830 --rc geninfo_unexecuted_blocks=1 00:28:43.830 00:28:43.830 ' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:43.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.830 --rc genhtml_branch_coverage=1 00:28:43.830 --rc genhtml_function_coverage=1 00:28:43.830 --rc genhtml_legend=1 00:28:43.830 --rc geninfo_all_blocks=1 00:28:43.830 --rc geninfo_unexecuted_blocks=1 00:28:43.830 00:28:43.830 ' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:43.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.830 --rc genhtml_branch_coverage=1 00:28:43.830 --rc genhtml_function_coverage=1 00:28:43.830 --rc genhtml_legend=1 00:28:43.830 --rc geninfo_all_blocks=1 00:28:43.830 --rc geninfo_unexecuted_blocks=1 00:28:43.830 00:28:43.830 ' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.830 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:43.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
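With rpc_py and the bdevperf RPC socket defined, the shape of the failover test that unfolds below is: one malloc-backed subsystem exported on three TCP listeners, a bdevperf initiator attached with failover enabled, and the active listener then torn out from under it. Condensed from the RPC calls recorded later in this log (a sketch of the flow, not the script itself):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                      # three candidate paths
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s "$port"
  done
  # bdevperf (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f)
  # attaches on port 4420 with failover enabled...
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # ...and the test then removes that listener mid-I/O, forcing the path switch
  # that produces the nvmf_tcp_qpair_set_recv_state messages at the end of this log
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420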
00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.831 06:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.967 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:51.968 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:51.968 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:51.968 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:51.968 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:28:51.968 00:28:51.968 --- 10.0.0.2 ping statistics --- 00:28:51.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.968 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:28:51.968 00:28:51.968 --- 10.0.0.1 ping statistics --- 00:28:51.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.968 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2950445 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2950445 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2950445 ']' 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:51.968 06:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:51.968 [2024-11-20 06:39:11.718311] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:28:51.969 [2024-11-20 06:39:11.718376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.969 [2024-11-20 06:39:11.820554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.969 [2024-11-20 06:39:11.871814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
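Note the core mask difference from the fio test above: nvmfappstart launches this nvmf_tgt with -m 0xE instead of -m 0xF, so core 0 is left free and the mask's set bits (binary 1110) account for the three "Reactor started on core 1/2/3" notices that follow. A quick way to expand such a mask:

  mask=0xE
  for i in $(seq 0 31); do
      (( (mask >> i) & 1 )) && echo "reactor expected on core $i"
  done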
00:28:51.969 [2024-11-20 06:39:11.871861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.969 [2024-11-20 06:39:11.871870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.969 [2024-11-20 06:39:11.871878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.969 [2024-11-20 06:39:11.871885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.969 [2024-11-20 06:39:11.873698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.969 [2024-11-20 06:39:11.873860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.969 [2024-11-20 06:39:11.873860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:52.540 [2024-11-20 06:39:12.754815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.540 06:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:52.801 Malloc0 00:28:52.801 06:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.062 06:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:53.323 06:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.323 [2024-11-20 06:39:13.573968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.584 06:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:53.584 [2024-11-20 06:39:13.770576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:53.584 06:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:53.846 [2024-11-20 06:39:13.967240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2950815 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2950815 /var/tmp/bdevperf.sock 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2950815 ']' 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:53.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:53.846 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:54.790 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:54.790 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:28:54.790 06:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:55.050 NVMe0n1 00:28:55.050 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:55.312 00:28:55.312 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2951154 00:28:55.312 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:28:55.312 06:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:56.256 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:56.518 [2024-11-20 06:39:16.662819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49ed0 is same with the state(6) to be set 00:28:56.518 [2024-11-20 06:39:16.662853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49ed0 is same with the state(6) to be set 00:28:56.518 [2024-11-20 06:39:16.662859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49ed0 is same with the state(6) to be set 00:28:56.518 [2024-11-20 
00:28:56.519 06:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:28:59.820 06:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:59.820
00:28:59.820 06:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:00.082 [2024-11-20 06:39:20.152146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4acf0 is same with the state(6) to be set
[previous tcp.c:1773 record repeated ~45 times for tqpair=0xb4acf0 while the port 4421 listener was torn down; near-identical duplicates elided]
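[Editor's note: each tcp.c:1773 burst in this log coincides with one nvmf_subsystem_remove_listener call: the target tears down the qpairs on the dropped portal and the -x failover policy moves I/O to the next registered path. The rotation the test drives while bdevperf runs, condensed from the traces above into a sketch (the attach of port 4422 between the two removals appears in the records just above):]

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the queued workload, then pull the active portal out from under it
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420    # first failover: 4420 -> 4421
    sleep 3
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421    # second failover: 4421 -> 4422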
00:29:00.082 06:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:29:03.382 06:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:03.382 [2024-11-20 06:39:23.332006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:03.382 06:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:29:04.325 06:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:04.325 [2024-11-20 06:39:24.521387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4bbf0 is same with the state(6) to be set
[previous tcp.c:1773 record repeated ~100 times for tqpair=0xb4bbf0 while the port 4422 listener was torn down; near-identical duplicates elided]
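[Editor's note: the three error bursts map one-to-one onto the three listener removals (ports 4420, 4421, 4422); the message itself only reports that a qpair's recv state was set to the value it already had during teardown. A quick way to tally the elided records from a saved copy of this console output; the log file name here is hypothetical:]

    # Count the repeated recv-state records per TCP qpair
    grep -o 'tqpair=0x[0-9a-f]*' nvmf-failover-console.log | sort | uniq -c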
00:29:04.326 06:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2951154
00:29:10.920 {
00:29:10.920 "results": [
00:29:10.920 {
00:29:10.920 "job": "NVMe0n1",
00:29:10.920 "core_mask": "0x1",
00:29:10.920 "workload": "verify",
00:29:10.920 "status": "finished",
00:29:10.920 "verify_range": {
00:29:10.920 "start": 0,
00:29:10.920 "length": 16384
00:29:10.920 },
00:29:10.920 "queue_depth": 128,
00:29:10.920 "io_size": 4096,
00:29:10.920 "runtime": 15.005562,
00:29:10.920 "iops": 12254.522689653344,
00:29:10.920 "mibps": 47.869229256458375,
00:29:10.920 "io_failed": 11508,
00:29:10.920 "io_timeout": 0,
00:29:10.920 "avg_latency_us": 9808.739175750876,
00:29:10.920 "min_latency_us": 641.7066666666667,
00:29:10.920 "max_latency_us": 23374.506666666668
00:29:10.920 }
00:29:10.920 ],
00:29:10.920 "core_count": 1
00:29:10.920 }
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2950815
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2950815 ']'
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2950815
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2950815
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2950815'
00:29:10.920 killing process with pid 2950815
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2950815
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2950815
00:29:10.920 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:10.920 [2024-11-20 06:39:14.052932] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:29:10.920 [2024-11-20 06:39:14.053009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950815 ]
00:29:10.920 [2024-11-20 06:39:14.148191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:10.920 [2024-11-20 06:39:14.201670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:10.920 Running I/O for 15 seconds...
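[Editor's note: the run ends with "status": "finished" even though io_failed is 11508; those failures line up with the ABORTED - SQ DELETION completions in try.txt below, i.e. I/O that was in flight when a listener was removed and was presumably retried on the surviving path. A minimal sketch for pulling the headline numbers out of the JSON block above, assuming it has been saved to results.json:]

    # Extract throughput and the failover-abort counter from the bdevperf result
    jq '.results[0] | {iops, io_failed, avg_latency_us}' results.json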
00:29:10.920 11131.00 IOPS, 43.48 MiB/s [2024-11-20T05:39:31.199Z]
[2024-11-20 06:39:16.665294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.920 [2024-11-20 06:39:16.665329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same READ command / ABORTED - SQ DELETION completion pair repeats for every I/O in flight at the first listener removal, with cid varying and lba advancing in steps of 8 from 95856 toward 96256; ~50 near-identical pairs elided]
00:29:10.920 [2024-11-20 06:39:16.666203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96256 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.921 [2024-11-20 06:39:16.666378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.921 [2024-11-20 06:39:16.666431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.921 [2024-11-20 06:39:16.666450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.921 [2024-11-20 06:39:16.666460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666555] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.666974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.922 [2024-11-20 06:39:16.666982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.667003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.922 [2024-11-20 06:39:16.667011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:29:10.922 [2024-11-20 06:39:16.667018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.667029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.922 [2024-11-20 06:39:16.667035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.922 [2024-11-20 06:39:16.667041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:29:10.922 [2024-11-20 06:39:16.667049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.667058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.922 [2024-11-20 06:39:16.667064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.922 [2024-11-20 06:39:16.667070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96640 len:8 PRP1 0x0 PRP2 0x0 00:29:10.922 [2024-11-20 06:39:16.667077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.667085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.922 [2024-11-20 
06:39:16.667091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.922 [2024-11-20 06:39:16.667098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96648 len:8 PRP1 0x0 PRP2 0x0 00:29:10.922 [2024-11-20 06:39:16.667105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.667112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.922 [2024-11-20 06:39:16.667118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.922 [2024-11-20 06:39:16.667124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 00:29:10.922 [2024-11-20 06:39:16.667131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.922 [2024-11-20 06:39:16.667139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.922 [2024-11-20 06:39:16.667144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667259] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.667564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96784 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.667579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.667585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 
06:39:16.667593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96792 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.667600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.680530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.680557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.680568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96800 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.680577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.680585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.680592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.680598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96808 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.680606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.680614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.680619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.680625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96816 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.680634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.680642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.680652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.680658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96824 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.680665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.680673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.680679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.680685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96832 len:8 PRP1 0x0 PRP2 0x0 00:29:10.923 [2024-11-20 06:39:16.680692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.923 [2024-11-20 06:39:16.680700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.923 [2024-11-20 06:39:16.680707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.923 [2024-11-20 06:39:16.680713] nvme_qpair.c: 
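The status printed as "(00/08)" on every completion above decodes as NVMe status code type 0x00 (generic) with status code 0x08, ABORTED - SQ DELETION: each I/O was discarded because its submission queue was torn down, not because the media failed. A minimal sketch, not part of this test, of how an SPDK completion callback could recognize these completions and mark the I/O retryable; the requeue_io() helper is hypothetical and stands in for whatever retry list an application keeps:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Hypothetical helper: stands in for the application's retry queue. */
    static void
    requeue_io(void *io_ctx)
    {
            (void)io_ctx;
    }

    static bool
    is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
            /* "(00/08)" in the log is status code type 0x00 (generic)
             * and status code 0x08 (ABORTED - SQ DELETION). */
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    static void
    io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) && is_sq_deletion_abort(cpl)) {
                    /* The command never executed; it is safe to resubmit
                     * once the controller reset has finished. */
                    requeue_io(io_ctx);
                    return;
            }
            /* ... normal completion handling ... */
    }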
00:29:10.924 [2024-11-20 06:39:16.680874] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:10.924 [2024-11-20 06:39:16.680905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:10.924 [2024-11-20 06:39:16.680916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining ASYNC EVENT REQUESTs (qid:0, cid:1 through cid:3) are aborted the same way, through 06:39:16.680969 ...]
00:29:10.924 [2024-11-20 06:39:16.680982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:10.924 [2024-11-20 06:39:16.681026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b1d70 (9): Bad file descriptor
00:29:10.924 [2024-11-20 06:39:16.684595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:10.924 [2024-11-20 06:39:16.885880] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:29:10.924 9979.50 IOPS, 38.98 MiB/s [2024-11-20T05:39:31.203Z] 10360.00 IOPS, 40.47 MiB/s [2024-11-20T05:39:31.203Z] 10895.25 IOPS, 42.56 MiB/s [2024-11-20T05:39:31.203Z]
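The sequence above, transport error, controller marked failed, reset, reset successful, is the bdev_nvme recovery path for a lost TCP connection. A minimal sketch of the same transition driven through the public API; spdk_nvme_ctrlr_reset() is used here only as a stand-in for the bdev_nvme module's internal reset state machine:

    #include "spdk/nvme.h"
    #include "spdk/log.h"

    /* Performs the disconnect -> reset -> reconnect transition the log
     * records as "resetting controller" followed by "Resetting controller
     * successful"; I/O aborted with SQ DELETION while the path was down
     * can be resubmitted by the caller afterwards. */
    static int
    recover_controller(struct spdk_nvme_ctrlr *ctrlr)
    {
            int rc = spdk_nvme_ctrlr_reset(ctrlr);

            if (rc != 0) {
                    SPDK_ERRLOG("controller reset failed: %d\n", rc);
            }
            return rc;
    }

The throughput samples are also self-consistent assuming 512-byte blocks (the log does not state the block size): len:8 means 8 blocks, so 4096 bytes per I/O, and 9979.50 IOPS x 4096 B is about 40,876,000 B/s, i.e. 38.98 MiB/s, matching the first sample.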
00:29:10.924 [2024-11-20 06:39:20.153747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.924 [2024-11-20 06:39:20.153779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs follow for lba 57560 through 57912 in steps of 8 (cid varies), through 06:39:20.154373, every command aborted with SQ DELETION (00/08) ...]
00:29:10.925 [2024-11-20 06:39:20.154380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... identical WRITE/completion pairs follow for lba 57928 through 58152 in steps of 8 (cid varies), every command aborted with SQ DELETION (00/08) ...]
00:29:10.926 [2024-11-20 06:39:20.154731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:10.926 [2024-11-20 06:39:20.154736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 
[2024-11-20 06:39:20.154861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.154990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.154997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.926 [2024-11-20 06:39:20.155004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.926 [2024-11-20 06:39:20.155010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.927 [2024-11-20 06:39:20.155155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58440 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58448 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58456 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58464 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155254] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58472 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58480 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58488 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58496 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58504 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58512 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58520 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58528 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58536 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58544 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58552 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 06:39:20.155469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.927 [2024-11-20 06:39:20.155473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58560 len:8 PRP1 0x0 PRP2 0x0 00:29:10.927 [2024-11-20 06:39:20.155478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.927 [2024-11-20 06:39:20.155484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.927 [2024-11-20 
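Note on the condensed runs above: the "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe status (SCT/SC), i.e. status code type 0x0 (generic command status) with status code 0x08 (Command Aborted due to SQ Deletion), the expected status while submission queues are torn down for a failover. The manually completed entries carry PRP1 0x0 PRP2 0x0 because those requests were still queued in the driver and never reached the controller. A minimal sketch of the status check, assuming the public spdk_nvme_cpl layout and status constants from SPDK's include/spdk/nvme_spec.h:

    /* Decode the "(00/08)" status seen in the completions above. */
    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    static bool aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
        /* SCT 0x0 = generic command status; SC 0x08 = Command Aborted
         * due to SQ Deletion -> printed as "(00/08)" in this log. */
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }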
00:29:10.928 [2024-11-20 06:39:20.166452] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:10.928 [2024-11-20 06:39:20.166482 - 06:39:20.166536] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: four ASYNC EVENT REQUEST (0c) commands qid:0 cid:3/2/1/0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.928 [2024-11-20 06:39:20.166544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:29:10.928 [2024-11-20 06:39:20.166583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b1d70 (9): Bad file descriptor
00:29:10.928 [2024-11-20 06:39:20.169855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:29:10.928 [2024-11-20 06:39:20.193947] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
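The failover sequence above is the bdev_nvme module reacting to the dead TCP path: pending admin ASYNC EVENT REQUESTs are aborted, the controller is marked failed, the broken socket can no longer be flushed (hence "Bad file descriptor"), and the controller is disconnected and reset against the next registered transport ID (10.0.0.2:4422). A minimal sketch of how a polling application detects the same condition, assuming the public spdk/nvme.h API (bdev_nvme uses its own internal reconnect path; this is only illustrative):

    /* Poll a qpair and fall back to a controller reset when the transport dies. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static void poll_once(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        int rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
        if (rc == -ENXIO && spdk_nvme_ctrlr_is_failed(ctrlr)) {
            /* Transport-level failure, as with the "Bad file descriptor"
             * error above: reset/reconnect before resubmitting I/O. */
            spdk_nvme_ctrlr_reset(ctrlr);
        }
    }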
00:29:10.928 11187.20 IOPS, 43.70 MiB/s [2024-11-20T05:39:31.207Z] 11460.67 IOPS, 44.77 MiB/s [2024-11-20T05:39:31.207Z] 11668.86 IOPS, 45.58 MiB/s [2024-11-20T05:39:31.207Z] 11829.38 IOPS, 46.21 MiB/s [2024-11-20T05:39:31.207Z]
00:29:10.928 [2024-11-20 06:39:24.523314 - 06:39:24.524439] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated per-command entries condensed: in-flight READs sqid:1 nsid:1 lba:120592-120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITEs sqid:1 nsid:1 lba:120928-121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the run continues past the end of this excerpt (log truncated mid-entry)
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.930 [2024-11-20 06:39:24.524611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.930 [2024-11-20 06:39:24.524628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.930 [2024-11-20 06:39:24.524634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121432 len:8 PRP1 0x0 PRP2 0x0 00:29:10.930 [2024-11-20 06:39:24.524639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121440 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121448 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 
06:39:24.524696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121456 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121464 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121472 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121480 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121488 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121496 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121504 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121512 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121520 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121528 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121536 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121544 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:121552 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.524946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121560 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.524951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.524957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.524961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.536858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121568 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.536883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.536895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.536901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.536906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121576 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.536911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.536916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.536920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.536926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121584 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.536932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.536937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.536941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.536947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121592 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.536953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.536958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.536962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.536971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121600 len:8 PRP1 0x0 PRP2 
0x0 00:29:10.931 [2024-11-20 06:39:24.536976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.536981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.931 [2024-11-20 06:39:24.536985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.931 [2024-11-20 06:39:24.536990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121608 len:8 PRP1 0x0 PRP2 0x0 00:29:10.931 [2024-11-20 06:39:24.536995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.537034] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:10.931 [2024-11-20 06:39:24.537058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.931 [2024-11-20 06:39:24.537065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.931 [2024-11-20 06:39:24.537072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.932 [2024-11-20 06:39:24.537078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.932 [2024-11-20 06:39:24.537084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.932 [2024-11-20 06:39:24.537090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.932 [2024-11-20 06:39:24.537096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.932 [2024-11-20 06:39:24.537101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.932 [2024-11-20 06:39:24.537107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:10.932 [2024-11-20 06:39:24.537140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b1d70 (9): Bad file descriptor 00:29:10.932 [2024-11-20 06:39:24.539568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:10.932 11843.11 IOPS, 46.26 MiB/s [2024-11-20T05:39:31.211Z] [2024-11-20 06:39:24.655257] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
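For anyone scanning this stretch of the log: "ABORTED - SQ DELETION (00/08)" is SPDK's rendering of NVMe status code type 0x0 (generic command status), status code 0x08 (Command Aborted due to SQ Deletion): the expected completion for I/O still queued on a submission queue that the failover path deletes. In this run it is churn rather than loss; the test keeps running and the results printed further down report "io_failed": 0. A quick way to gauge how noisy a reset cycle was is to count those completions in the captured bdevperf log (a sketch; try.txt is the log file this test cats below):

    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt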
00:29:10.932 11912.30 IOPS, 46.53 MiB/s [2024-11-20T05:39:31.211Z]
00:29:10.932 11999.27 IOPS, 46.87 MiB/s [2024-11-20T05:39:31.211Z]
00:29:10.932 12073.92 IOPS, 47.16 MiB/s [2024-11-20T05:39:31.211Z]
00:29:10.932 12144.92 IOPS, 47.44 MiB/s [2024-11-20T05:39:31.211Z]
00:29:10.932 12208.07 IOPS, 47.69 MiB/s
00:29:10.932 Latency(us)
00:29:10.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.932 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:10.932 Verification LBA range: start 0x0 length 0x4000
00:29:10.932 NVMe0n1 : 15.01 12254.52 47.87 766.92 0.00 9808.74 641.71 23374.51
00:29:10.932 ===================================================================================================================
00:29:10.932 Total : 12254.52 47.87 766.92 0.00 9808.74 641.71 23374.51
00:29:10.932 Received shutdown signal, test time was about 15.000000 seconds
00:29:10.932
00:29:10.932 Latency(us)
00:29:10.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.932 ===================================================================================================================
00:29:10.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2954162
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2954162 /var/tmp/bdevperf.sock
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2954162 ']'
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:10.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
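The launch at @72-@75 above is the usual SPDK harness pattern: bdevperf starts with -z (park until configured over RPC) and -r (a private Unix RPC socket), with -q/-o/-w/-t fixing queue depth, I/O size, workload and duration, and waitforlisten polls the socket before any RPC is issued. A minimal standalone version of that launch-and-wait dance (a sketch, not the autotest helper: paths assume an SPDK checkout, and rpc_get_methods is used here as a readiness probe):

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
    bdevperf_pid=$!
    # poll until the app answers on its RPC socket; only then is it safe to configure it
    until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done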
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:10.932 06:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:11.503 06:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:11.503 06:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:29:11.503 06:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:11.763 [2024-11-20 06:39:31.827828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:11.763 06:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:11.763 [2024-11-20 06:39:32.004254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:29:11.763 06:39:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:12.335 NVMe0n1
00:29:12.335 06:39:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:12.596
00:29:12.596 06:39:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:12.857
00:29:12.857 06:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:12.857 06:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:29:13.117 06:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:13.376 06:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:29:16.768 06:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:16.768 06:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:29:16.768 06:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2955185
00:29:16.768 06:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:16.768 06:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2955185
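Stripped of the xtrace prefixes, failover.sh@76-@92 above does four things: publish two extra TCP listeners on the target, attach all three portals to the same NVMe0 bdev with -x failover, detach the active path, and have bdevperf.py drive perform_tests, whose JSON result is printed next. Condensed into plain commands (same NQN, addresses and socket as the trace; $rpc stands in for scripts/rpc.py):

    rpc=./scripts/rpc.py
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do   # three paths, one bdev
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # removing the path currently carrying I/O is what forces the failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests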
00:29:17.708 "workload": "verify", 00:29:17.708 "status": "finished", 00:29:17.708 "verify_range": { 00:29:17.708 "start": 0, 00:29:17.708 "length": 16384 00:29:17.708 }, 00:29:17.708 "queue_depth": 128, 00:29:17.708 "io_size": 4096, 00:29:17.708 "runtime": 1.012562, 00:29:17.708 "iops": 13144.87409166056, 00:29:17.708 "mibps": 51.34716442054906, 00:29:17.708 "io_failed": 0, 00:29:17.708 "io_timeout": 0, 00:29:17.708 "avg_latency_us": 9704.603664412722, 00:29:17.708 "min_latency_us": 2129.92, 00:29:17.708 "max_latency_us": 8465.066666666668 00:29:17.708 } 00:29:17.708 ], 00:29:17.708 "core_count": 1 00:29:17.708 } 00:29:17.708 06:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:17.708 [2024-11-20 06:39:30.884743] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:29:17.708 [2024-11-20 06:39:30.884856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954162 ] 00:29:17.708 [2024-11-20 06:39:30.970970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.708 [2024-11-20 06:39:31.000097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.708 [2024-11-20 06:39:33.424466] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:17.708 [2024-11-20 06:39:33.424505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.708 [2024-11-20 06:39:33.424515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.708 [2024-11-20 06:39:33.424522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.708 [2024-11-20 06:39:33.424527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.708 [2024-11-20 06:39:33.424533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.708 [2024-11-20 06:39:33.424539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.708 [2024-11-20 06:39:33.424544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.708 [2024-11-20 06:39:33.424550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.708 [2024-11-20 06:39:33.424555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:29:17.708 06:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:17.708 [2024-11-20 06:39:30.884743] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:29:17.708 [2024-11-20 06:39:30.884856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954162 ]
00:29:17.708 [2024-11-20 06:39:30.970970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:17.708 [2024-11-20 06:39:31.000097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:17.708 [2024-11-20 06:39:33.424466] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:17.708 [2024-11-20 06:39:33.424505 .. 06:39:33.424550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: (repeated for qid:0 cid:0-3) ASYNC EVENT REQUEST (0c) -> ABORTED - SQ DELETION (00/08)
00:29:17.708 [2024-11-20 06:39:33.424555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:29:17.708 [2024-11-20 06:39:33.424575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:29:17.708 [2024-11-20 06:39:33.424587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3ad70 (9): Bad file descriptor
00:29:17.708 [2024-11-20 06:39:33.472186] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:29:17.708 Running I/O for 1 seconds...
00:29:17.708 13055.00 IOPS, 51.00 MiB/s
00:29:17.708 Latency(us)
00:29:17.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.708 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:17.708 Verification LBA range: start 0x0 length 0x4000
00:29:17.708 NVMe0n1 : 1.01 13144.87 51.35 0.00 0.00 9704.60 2129.92 8465.07
00:29:17.708 ===================================================================================================================
00:29:17.708 Total : 13144.87 51.35 0.00 0.00 9704.60 2129.92 8465.07
00:29:17.708 06:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:17.708 06:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:29:17.708 06:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:17.968 06:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:17.968 06:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:29:18.228 06:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:18.488 06:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2954162
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2954162 ']'
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2954162
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2954162
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover --
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2954162' 00:29:21.793 killing process with pid 2954162 00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2954162 00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2954162 00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:21.793 06:39:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.793 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.793 rmmod nvme_tcp 00:29:22.054 rmmod nvme_fabrics 00:29:22.054 rmmod nvme_keyring 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2950445 ']' 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2950445 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2950445 ']' 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2950445 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2950445 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2950445' 00:29:22.054 killing process with pid 2950445 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2950445 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2950445 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.054 06:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.609 06:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.609 00:29:24.609 real 0m40.547s 00:29:24.609 user 2m4.368s 00:29:24.609 sys 0m8.926s 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:24.610 ************************************ 00:29:24.610 END TEST nvmf_failover 00:29:24.610 ************************************ 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.610 ************************************ 00:29:24.610 START TEST nvmf_host_discovery 00:29:24.610 ************************************ 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:24.610 * Looking for test storage... 
00:29:24.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.610 --rc genhtml_branch_coverage=1 00:29:24.610 --rc genhtml_function_coverage=1 00:29:24.610 --rc genhtml_legend=1 00:29:24.610 --rc geninfo_all_blocks=1 00:29:24.610 --rc geninfo_unexecuted_blocks=1 00:29:24.610 00:29:24.610 ' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.610 --rc genhtml_branch_coverage=1 00:29:24.610 --rc genhtml_function_coverage=1 00:29:24.610 --rc genhtml_legend=1 00:29:24.610 --rc geninfo_all_blocks=1 00:29:24.610 --rc geninfo_unexecuted_blocks=1 00:29:24.610 00:29:24.610 ' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.610 --rc genhtml_branch_coverage=1 00:29:24.610 --rc genhtml_function_coverage=1 00:29:24.610 --rc genhtml_legend=1 00:29:24.610 --rc geninfo_all_blocks=1 00:29:24.610 --rc geninfo_unexecuted_blocks=1 00:29:24.610 00:29:24.610 ' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:24.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.610 --rc genhtml_branch_coverage=1 00:29:24.610 --rc genhtml_function_coverage=1 00:29:24.610 --rc genhtml_legend=1 00:29:24.610 --rc geninfo_all_blocks=1 00:29:24.610 --rc geninfo_unexecuted_blocks=1 00:29:24.610 00:29:24.610 ' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:24.610 06:39:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.610 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.611 06:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:32.749 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:32.749 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.749 06:39:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:32.749 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:32.749 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.749 
06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.749 06:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.749 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.749 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.749 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.749 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.749 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.749 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:29:32.749 00:29:32.749 --- 10.0.0.2 ping statistics --- 00:29:32.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.749 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:29:32.750 00:29:32.750 --- 10.0.0.1 ping statistics --- 00:29:32.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.750 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2960532 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2960532 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2960532 ']' 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:32.750 06:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.750 [2024-11-20 06:39:52.246317] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
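The nvmf_tcp_init steps traced above reduce to a short sequence: the first E810 port (cvl_0_0 in this run) is moved into a private network namespace to act as the target endpoint, its peer port (cvl_0_1) stays in the default namespace as the initiator, and the target application is then launched inside the namespace. A condensed sketch of that sequence, with interface names, addresses, and flags taken from this trace (paths shortened; the real harness drives this from nvmf/common.sh and additionally tags the iptables rule with an -m comment marker):

# Build the two-endpoint loopback topology used by this run.
ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (default ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP I/O port
ping -c 1 10.0.0.2                                    # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability
# Start the target in the namespace: shm id 0, all tracepoint groups, core mask 0x2.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &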
00:29:32.750 [2024-11-20 06:39:52.246385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.750 [2024-11-20 06:39:52.344984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.750 [2024-11-20 06:39:52.395661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.750 [2024-11-20 06:39:52.395710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.750 [2024-11-20 06:39:52.395719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.750 [2024-11-20 06:39:52.395727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.750 [2024-11-20 06:39:52.395733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.750 [2024-11-20 06:39:52.396507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 [2024-11-20 06:39:53.108863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 [2024-11-20 06:39:53.121126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 null0 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 null1 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2960722 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2960722 /tmp/host.sock 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2960722 ']' 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:33.011 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:33.011 06:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.011 [2024-11-20 06:39:53.219279] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
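Two SPDK processes are now up: the target inside the namespace, answering RPCs on the default socket /var/tmp/spdk.sock, and a host-side application outside it on /tmp/host.sock with core mask 0x1. The discovery test then drives both over RPC. A minimal sketch of that sequence using the same commands visible in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py; -s selects the host socket; bdev_null_create arguments are name, size in MB, block size):

# Target side: TCP transport, discovery listener on 8009, and two null bdevs to export.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine

# Host side: enable bdev_nvme logging and attach to the discovery service.
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

From here the test builds up nqn.2016-06.io.spdk:cnode0 step by step (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_host, nvmf_subsystem_add_listener on port 4420) and after each step checks that the host-side discovery service reacts as expected.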
00:29:33.011 [2024-11-20 06:39:53.219347] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960722 ] 00:29:33.272 [2024-11-20 06:39:53.312593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.272 [2024-11-20 06:39:53.365947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:33.842 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.103 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:34.103 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:34.103 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.103 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.103 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.104 [2024-11-20 06:39:54.372395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.104 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:34.364 06:39:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:34.364 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:29:34.365 06:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:29:34.937 [2024-11-20 06:39:55.098218] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:34.937 [2024-11-20 06:39:55.098251] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:34.937 [2024-11-20 06:39:55.098266] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:34.937 
[2024-11-20 06:39:55.186532] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:35.198 [2024-11-20 06:39:55.287580] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:35.198 [2024-11-20 06:39:55.288933] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb077a0:1 started. 00:29:35.198 [2024-11-20 06:39:55.290905] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:35.198 [2024-11-20 06:39:55.290936] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:35.198 [2024-11-20 06:39:55.296854] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb077a0 was disconnected and freed. delete nvme_qpair. 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.459 06:39:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:35.459 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.720 06:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.981 [2024-11-20 06:39:56.027199] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xad60a0:1 started. 00:29:35.981 [2024-11-20 06:39:56.038314] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xad60a0 was disconnected and freed. delete nvme_qpair. 
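The repetitive eval/sleep churn above is SPDK's waitforcondition helper polling host-side RPC state until the discovery service converges on the expected controllers, bdevs, and listener ports. A close sketch reconstructed from the autotest_common.sh@916-@922 trace lines, not the verbatim source (the timeout failure path and the exact helper bodies are inferred):

# Re-evaluate a condition string up to ~10 times, one second apart.
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1   # assumed failure path; the trace only shows the success return
}

# Query helpers seen in the trace; rpc_cmd wraps scripts/rpc.py.
get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }

# Conditions exercised in this run (NVMF_PORT=4420, NVMF_SECOND_PORT=4421):
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'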
00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.981 [2024-11-20 06:39:56.117320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:35.981 [2024-11-20 06:39:56.117777] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:35.981 [2024-11-20 06:39:56.117798] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.981 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:35.982 06:39:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:35.982 [2024-11-20 06:39:56.204526] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:35.982 06:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:29:36.243 [2024-11-20 06:39:56.304408] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:29:36.243 [2024-11-20 06:39:56.304446] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:36.243 [2024-11-20 06:39:56.304454] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:36.243 [2024-11-20 06:39:56.304459] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.185 [2024-11-20 06:39:57.365374] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:37.185 [2024-11-20 06:39:57.365392] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:37.185 [2024-11-20 06:39:57.371713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.185 [2024-11-20 06:39:57.371727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.185 [2024-11-20 06:39:57.371734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.185 [2024-11-20 06:39:57.371739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.185 [2024-11-20 06:39:57.371745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.185 [2024-11-20 06:39:57.371750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.185 [2024-11-20 06:39:57.371756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.185 [2024-11-20 06:39:57.371761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.185 [2024-11-20 06:39:57.371767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:37.185 [2024-11-20 06:39:57.381728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.185 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.185 [2024-11-20 06:39:57.391762] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:37.185 [2024-11-20 06:39:57.391771] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:37.185 [2024-11-20 06:39:57.391775] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:37.185 [2024-11-20 06:39:57.391778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.185 [2024-11-20 06:39:57.391791] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:37.185 [2024-11-20 06:39:57.392066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.185 [2024-11-20 06:39:57.392077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7e10 with addr=10.0.0.2, port=4420 00:29:37.185 [2024-11-20 06:39:57.392082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.185 [2024-11-20 06:39:57.392091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.185 [2024-11-20 06:39:57.392099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:37.185 [2024-11-20 06:39:57.392104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:37.185 [2024-11-20 06:39:57.392110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:37.185 [2024-11-20 06:39:57.392115] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:37.185 [2024-11-20 06:39:57.392119] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
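The repeating records above are the expected fallout of the nvmf_subsystem_remove_listener call traced at host/discovery.sh@127: once the 4420 listener is gone, every host-side reconnect attempt fails with connect() errno 111 (ECONNREFUSED), and bdev_nvme cycles through delete-qpairs, disconnect, reconnect, and "Resetting controller failed" until the discovery service steers the host to 4421. Outside the harness, the same removal could be issued directly; a sketch, assuming a target on the default RPC socket (rpc_cmd in these traces is the harness wrapper around scripts/rpc.py):

    # Remove the tcp listener on 10.0.0.2:4420 from the subsystem, as the
    # traced rpc_cmd invocation above does via the harness wrapper.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420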
00:29:37.185 [2024-11-20 06:39:57.392123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:37.185 [2024-11-20 06:39:57.401820] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:37.185 [2024-11-20 06:39:57.401829] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:37.185 [2024-11-20 06:39:57.401832] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:37.185 [2024-11-20 06:39:57.401835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.185 [2024-11-20 06:39:57.401845] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:37.185 [2024-11-20 06:39:57.402120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.185 [2024-11-20 06:39:57.402129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7e10 with addr=10.0.0.2, port=4420 00:29:37.185 [2024-11-20 06:39:57.402138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.185 [2024-11-20 06:39:57.402146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.185 [2024-11-20 06:39:57.402154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:37.185 [2024-11-20 06:39:57.402162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:37.185 [2024-11-20 06:39:57.402168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:37.185 [2024-11-20 06:39:57.402172] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:37.186 [2024-11-20 06:39:57.402175] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:37.186 [2024-11-20 06:39:57.402178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:37.186 [2024-11-20 06:39:57.411874] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:37.186 [2024-11-20 06:39:57.411886] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:37.186 [2024-11-20 06:39:57.411889] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:37.186 [2024-11-20 06:39:57.411892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.186 [2024-11-20 06:39:57.411903] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:37.186 [2024-11-20 06:39:57.412704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.186 [2024-11-20 06:39:57.412723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7e10 with addr=10.0.0.2, port=4420 00:29:37.186 [2024-11-20 06:39:57.412730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.186 [2024-11-20 06:39:57.412742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.186 [2024-11-20 06:39:57.412776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.186 [2024-11-20 06:39:57.412783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:37.186 [2024-11-20 06:39:57.412797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:37.186 [2024-11-20 06:39:57.412802] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:37.186 [2024-11-20 06:39:57.412810] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
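waitforcondition, whose internals at autotest_common.sh@916-922 are traced throughout this test (including just above), polls an arbitrary shell condition up to ten times at one-second intervals. A minimal sketch consistent with the trace; the real helper also manages xtrace state, and the failure path shown here is assumed:

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            # evaluate the condition string verbatim, as the @919 traces show
            if eval "$cond"; then
                return 0
            fi
            sleep 1     # the @922 'sleep 1' between retries
        done
        return 1        # assumed: give up after ten failed polls
    }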
00:29:37.186 [2024-11-20 06:39:57.412815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.186 [2024-11-20 06:39:57.421932] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:37.186 [2024-11-20 06:39:57.421943] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:37.186 [2024-11-20 06:39:57.421947] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:37.186 [2024-11-20 06:39:57.421950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.186 [2024-11-20 06:39:57.421961] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:37.186 [2024-11-20 06:39:57.422374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.186 [2024-11-20 06:39:57.422404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7e10 with addr=10.0.0.2, port=4420 00:29:37.186 [2024-11-20 06:39:57.422412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.186 [2024-11-20 06:39:57.422426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.186 [2024-11-20 06:39:57.422445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:37.186 [2024-11-20 06:39:57.422451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:37.186 [2024-11-20 06:39:57.422456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:37.186 [2024-11-20 06:39:57.422461] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:37.186 [2024-11-20 06:39:57.422465] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:37.186 [2024-11-20 06:39:57.422468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:37.186 [2024-11-20 06:39:57.431992] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:37.186 [2024-11-20 06:39:57.432002] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:37.186 [2024-11-20 06:39:57.432005] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
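The three query helpers traced at host/discovery.sh@59, @55 and @63 share one shape: an RPC against the host application's socket, a jq projection, then sort | xargs to normalize the output into a single space-separated line. Reconstructed from the traces; the exact pipe order is inferred:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        # lists the trsvcid (port) of every path attached to controller $1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

This is why the comparisons in the trace read like [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]: two attached paths collapse into one sorted string.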
00:29:37.186 [2024-11-20 06:39:57.432008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.186 [2024-11-20 06:39:57.432020] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:37.186 [2024-11-20 06:39:57.432418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.186 [2024-11-20 06:39:57.432448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7e10 with addr=10.0.0.2, port=4420 00:29:37.186 [2024-11-20 06:39:57.432458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.186 [2024-11-20 06:39:57.432472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.186 [2024-11-20 06:39:57.432492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:37.186 [2024-11-20 06:39:57.432508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:37.186 [2024-11-20 06:39:57.432514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:37.186 [2024-11-20 06:39:57.432519] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:37.186 [2024-11-20 06:39:57.432523] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:37.186 [2024-11-20 06:39:57.432526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:37.186 [2024-11-20 06:39:57.442050] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:37.186 [2024-11-20 06:39:57.442060] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:37.186 [2024-11-20 06:39:57.442064] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:37.186 [2024-11-20 06:39:57.442067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.186 [2024-11-20 06:39:57.442078] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:37.186 [2024-11-20 06:39:57.442380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.186 [2024-11-20 06:39:57.442390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7e10 with addr=10.0.0.2, port=4420 00:29:37.186 [2024-11-20 06:39:57.442395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.186 [2024-11-20 06:39:57.442403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.186 [2024-11-20 06:39:57.442410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:37.186 [2024-11-20 06:39:57.442415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:37.186 [2024-11-20 06:39:57.442420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:37.186 [2024-11-20 06:39:57.442424] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:37.186 [2024-11-20 06:39:57.442428] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:37.186 [2024-11-20 06:39:57.442431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:37.186 [2024-11-20 06:39:57.452107] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:37.186 [2024-11-20 06:39:57.452115] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:37.186 [2024-11-20 06:39:57.452118] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:37.186 [2024-11-20 06:39:57.452121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.186 [2024-11-20 06:39:57.452131] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:37.186 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.186 [2024-11-20 06:39:57.452331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.186 [2024-11-20 06:39:57.452340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7e10 with addr=10.0.0.2, port=4420 00:29:37.186 [2024-11-20 06:39:57.452345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e10 is same with the state(6) to be set 00:29:37.186 [2024-11-20 06:39:57.452355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e10 (9): Bad file descriptor 00:29:37.186 [2024-11-20 06:39:57.452363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:37.186 [2024-11-20 06:39:57.452368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:37.186 [2024-11-20 06:39:57.452373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
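Interleaved with these reconnect records, the harness keeps polling the event stream. Its notification helper (host/discovery.sh@74-75, traced before and after this point) asks notify_get_notifications for every event past the last seen id and counts them with jq. A sketch matching the traced values, assuming notify_id simply advances by the count; the real helper may instead take the last event's id, and both agree with the 2 -> 4 step visible further down:

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }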
00:29:37.186 [2024-11-20 06:39:57.452377] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:37.186 [2024-11-20 06:39:57.452380] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:37.187 [2024-11-20 06:39:57.452384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:37.187 [2024-11-20 06:39:57.452548] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:37.187 [2024-11-20 06:39:57.452560] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:37.187 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:37.448 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.448 06:39:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.449 06:39:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:37.449 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.710 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:37.710 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:37.710 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:29:37.710 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:29:37.710 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:37.710 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.710 06:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.650 [2024-11-20 06:39:58.798348] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:38.650 [2024-11-20 06:39:58.798361] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:38.650 [2024-11-20 06:39:58.798370] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:38.650 [2024-11-20 06:39:58.886628] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:38.909 [2024-11-20 06:39:59.154936] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:29:38.909 [2024-11-20 06:39:59.155619] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xaebbd0:1 started. 
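At host/discovery.sh@141 the discovery service is restarted under the same bdev name with -w (wait_for_attach: the RPC does not return until the attach-done record just below). The harness then asserts that starting discovery again under an already-used name must fail; the JSON-RPC exchange that follows shows error -17, "File exists". The active discovery contexts can be listed the same way the @67 traces do, for example:

    # one line per active discovery service ('nvme' here)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'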
00:29:38.909 [2024-11-20 06:39:59.156994] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:38.909 [2024-11-20 06:39:59.157015] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.909 [2024-11-20 06:39:59.166095] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xaebbd0 was disconnected and freed. delete nvme_qpair. 
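NOT, the wrapper traced at autotest_common.sh@650-677 around the rpc_cmd call above, inverts the exit status so that the expected failure (the request/response immediately below) counts as a pass. A simplified sketch; the real helper also validates its argument via valid_exec_arg and special-cases exit statuses above 128, both omitted here:

    NOT() {
        local es=0
        "$@" || es=$?
        # NOT succeeds exactly when the wrapped command failed
        (( es != 0 ))
    }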
00:29:38.909 request: 00:29:38.909 { 00:29:38.909 "name": "nvme", 00:29:38.909 "trtype": "tcp", 00:29:38.909 "traddr": "10.0.0.2", 00:29:38.909 "adrfam": "ipv4", 00:29:38.909 "trsvcid": "8009", 00:29:38.909 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:38.909 "wait_for_attach": true, 00:29:38.909 "method": "bdev_nvme_start_discovery", 00:29:38.909 "req_id": 1 00:29:38.909 } 00:29:38.909 Got JSON-RPC error response 00:29:38.909 response: 00:29:38.909 { 00:29:38.909 "code": -17, 00:29:38.909 "message": "File exists" 00:29:38.909 } 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.909 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:39.169 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.170 request: 00:29:39.170 { 00:29:39.170 "name": "nvme_second", 00:29:39.170 "trtype": "tcp", 00:29:39.170 "traddr": "10.0.0.2", 00:29:39.170 "adrfam": "ipv4", 00:29:39.170 "trsvcid": "8009", 00:29:39.170 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:39.170 "wait_for_attach": true, 00:29:39.170 "method": "bdev_nvme_start_discovery", 00:29:39.170 "req_id": 1 00:29:39.170 } 00:29:39.170 Got JSON-RPC error response 00:29:39.170 response: 00:29:39.170 { 00:29:39.170 "code": -17, 00:29:39.170 "message": "File exists" 00:29:39.170 } 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:39.170 06:39:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.170 06:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.553 [2024-11-20 06:40:00.413574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.553 [2024-11-20 06:40:00.413609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3e590 with addr=10.0.0.2, port=8010 00:29:40.553 [2024-11-20 06:40:00.413623] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:40.553 [2024-11-20 06:40:00.413629] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:40.553 [2024-11-20 06:40:00.413635] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:41.492 [2024-11-20 06:40:01.416010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.492 [2024-11-20 06:40:01.416030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3e590 with addr=10.0.0.2, port=8010 00:29:41.492 [2024-11-20 06:40:01.416039] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:41.492 [2024-11-20 06:40:01.416045] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:41.492 [2024-11-20 06:40:01.416049] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:42.435 [2024-11-20 06:40:02.418000] 
bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:42.435 request: 00:29:42.435 { 00:29:42.435 "name": "nvme_second", 00:29:42.435 "trtype": "tcp", 00:29:42.435 "traddr": "10.0.0.2", 00:29:42.435 "adrfam": "ipv4", 00:29:42.435 "trsvcid": "8010", 00:29:42.435 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:42.435 "wait_for_attach": false, 00:29:42.435 "attach_timeout_ms": 3000, 00:29:42.435 "method": "bdev_nvme_start_discovery", 00:29:42.435 "req_id": 1 00:29:42.435 } 00:29:42.435 Got JSON-RPC error response 00:29:42.435 response: 00:29:42.435 { 00:29:42.435 "code": -110, 00:29:42.435 "message": "Connection timed out" 00:29:42.435 } 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2960722 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.435 rmmod nvme_tcp 00:29:42.435 rmmod nvme_fabrics 00:29:42.435 rmmod nvme_keyring 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:42.435 06:40:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2960532 ']' 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2960532 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 2960532 ']' 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 2960532 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2960532 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2960532' 00:29:42.435 killing process with pid 2960532 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 2960532 00:29:42.435 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 2960532 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.697 06:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.608 00:29:44.608 real 0m20.325s 00:29:44.608 user 0m23.585s 00:29:44.608 sys 0m7.207s 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.608 ************************************ 00:29:44.608 END TEST nvmf_host_discovery 00:29:44.608 ************************************ 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.608 ************************************ 00:29:44.608 START TEST nvmf_host_multipath_status 00:29:44.608 ************************************ 00:29:44.608 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:44.869 * Looking for test storage... 00:29:44.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.869 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:44.869 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:29:44.869 06:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:44.869 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:44.869 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.869 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.869 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.869 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.869 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.869 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:44.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.870 --rc genhtml_branch_coverage=1 00:29:44.870 --rc genhtml_function_coverage=1 00:29:44.870 --rc genhtml_legend=1 00:29:44.870 --rc geninfo_all_blocks=1 00:29:44.870 --rc geninfo_unexecuted_blocks=1 00:29:44.870 00:29:44.870 ' 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:44.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.870 --rc genhtml_branch_coverage=1 00:29:44.870 --rc genhtml_function_coverage=1 00:29:44.870 --rc genhtml_legend=1 00:29:44.870 --rc geninfo_all_blocks=1 00:29:44.870 --rc geninfo_unexecuted_blocks=1 00:29:44.870 00:29:44.870 ' 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:44.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.870 --rc genhtml_branch_coverage=1 00:29:44.870 --rc genhtml_function_coverage=1 00:29:44.870 --rc genhtml_legend=1 00:29:44.870 --rc geninfo_all_blocks=1 00:29:44.870 --rc geninfo_unexecuted_blocks=1 00:29:44.870 00:29:44.870 ' 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:44.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.870 --rc genhtml_branch_coverage=1 00:29:44.870 --rc genhtml_function_coverage=1 00:29:44.870 --rc genhtml_legend=1 00:29:44.870 --rc geninfo_all_blocks=1 00:29:44.870 --rc geninfo_unexecuted_blocks=1 00:29:44.870 00:29:44.870 ' 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
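
For readers following the trace: the scripts/common.sh block above is a dotted-version comparison — lcov reported 1.15, which sorts below 2, so the lcov 1.x coverage flag set is exported. A minimal standalone sketch of the same field-by-field idea (illustrative only; the harness's cmp_versions differs in detail):

    # Compare two dotted versions field by field; prints lt/gt/eq.
    ver_cmp() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && { echo lt; return; }
        (( ${a[i]:-0} > ${b[i]:-0} )) && { echo gt; return; }
      done
      echo eq
    }
    ver_cmp 1.15 2   # -> lt, matching the branch the trace takes above
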
00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:44.870 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.871 06:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:53.050 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.051 06:40:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:53.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
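
The wall of array setup above is nvmf/common.sh mapping NIC families to PCI vendor:device IDs; this rig's two E810 ports match 0x8086:0x159b and are bound to the ice driver, and their kernel net devices are read out of sysfs. A hedged sketch of what that discovery amounts to (not the harness code itself; the lspci/awk pipeline is my own):

    # List kernel net devices backed by Intel E810 (device ID 0x159b).
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
      done
    done

On this host the two ports come back as cvl_0_0 and cvl_0_1, as the "Found net devices under ..." lines below confirm; the harness then moves cvl_0_0 into a network namespace so target (10.0.0.2) and initiator (10.0.0.1) can talk over real wire.
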
00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:53.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:53.051 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:29:53.051 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.051 06:40:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:29:53.051 00:29:53.051 --- 10.0.0.2 ping statistics --- 00:29:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.051 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:29:53.051 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:29:53.051 00:29:53.052 --- 10.0.0.1 ping statistics --- 00:29:53.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.052 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2966819 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2966819 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2966819 ']' 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:53.052 06:40:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:53.052 06:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:53.052 [2024-11-20 06:40:12.682604] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:29:53.052 [2024-11-20 06:40:12.682673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.052 [2024-11-20 06:40:12.783213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:53.052 [2024-11-20 06:40:12.836397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.052 [2024-11-20 06:40:12.836448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.052 [2024-11-20 06:40:12.836456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.052 [2024-11-20 06:40:12.836464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.052 [2024-11-20 06:40:12.836470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.052 [2024-11-20 06:40:12.838242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.052 [2024-11-20 06:40:12.838269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2966819 00:29:53.313 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:53.574 [2024-11-20 06:40:13.715059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.574 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:53.835 Malloc0 00:29:53.835 06:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:29:54.097 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.097 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.357 [2024-11-20 06:40:14.533241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.357 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:54.619 [2024-11-20 06:40:14.733844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2967321 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2967321 /var/tmp/bdevperf.sock 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2967321 ']' 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:54.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
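
At this point the target side is fully configured and bdevperf is coming up as the initiator. Condensed from the calls traced above (rpc.py path shortened; flags verbatim from the trace — my gloss is that -r enables ANA reporting and -m caps namespaces):

    rpc=scripts/rpc.py    # shorthand for the full spdk/scripts/rpc.py path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on one subsystem give bdevperf two paths to the same namespace, which is exactly what the multipath status checks below exercise.
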
00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:54.619 06:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:55.561 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:55.561 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:29:55.561 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:55.822 06:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:56.082 Nvme0n1 00:29:56.082 06:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:56.343 Nvme0n1 00:29:56.343 06:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:56.343 06:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:58.888 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:58.888 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:58.888 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:58.888 06:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:59.830 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:59.830 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:59.830 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:59.830 06:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.090 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:00.351 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.351 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:00.351 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.351 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:00.613 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.613 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:00.613 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.613 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:00.874 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.874 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:00.874 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.874 06:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:00.874 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.874 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:00.874 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
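
set_ANA_state, seen at sh@59/sh@60 throughout, is just a pair of listener updates — the target then raises an ANA-change asynchronous event and bdevperf re-reads the ANA log page, which is why each round is followed by a one-second sleep. A sketch of the helper as it appears in this trace:

    set_ANA_state() {                  # e.g. set_ANA_state non_optimized optimized
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized optimized && sleep 1   # the sleep matches sh@95
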
00:30:01.135 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:01.395 06:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:02.420 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:02.680 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:02.680 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:02.680 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:02.680 06:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:02.940 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:02.940 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:02.940 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:02.940 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
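
Each check_status round above is six port_status probes against bdevperf's private RPC socket; jq pulls one attribute of one path out of the bdev_nvme_get_io_paths JSON. Distilled (a sketch; socket path and jq filter are verbatim from the trace):

    port_status() {                    # e.g. port_status 4420 current true
      local port=$1 attr=$2 want=$3 got
      got=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$got" == "$want" ]]          # a mismatch returns non-zero and fails the test
    }

With both listeners optimized, only the 4420 path is current: bdev_nvme's default active_passive policy carries I/O on a single optimized path (the trace only switches to active_active at sh@116, near the end).
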
00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.200 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:03.464 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:03.464 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:03.464 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:03.724 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:03.724 06:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:05.108 06:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:05.108 06:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:05.108 06:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.108 06:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
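
For orientation, the rounds in this trace walk an ANA state matrix; each check_status call asserts current/connected/accessible for the 4420 and 4421 paths, in that order. Collected from the trace:

    ANA 4420 / 4421                current        connected    accessible
    optimized / optimized          true,  false   true, true   true,  true
    non_optimized / optimized      false, true    true, true   true,  true
    non_optimized / non_optimized  true,  false   true, true   true,  true
    non_optimized / inaccessible   true,  false   true, true   true,  false
    inaccessible / inaccessible    false, false   true, true   false, false
    inaccessible / optimized       false, true    true, true   false, true

connected stays true throughout because the TCP connections persist; only the ANA access state decides which path is accessible and which one carries I/O.
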
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.108 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:05.369 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.369 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:05.369 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.369 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.630 06:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:05.890 06:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.890 06:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:05.890 06:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:06.151 06:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:06.151 06:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:07.535 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:07.535 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:07.535 06:40:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.536 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:07.797 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:07.797 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:07.797 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.797 06:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:08.057 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.057 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:08.057 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.057 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:08.319 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.319 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:08.319 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.319 06:40:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:08.319 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:08.319 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:08.319 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:08.579 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:08.840 06:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:09.780 06:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:09.780 06:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:09.780 06:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.780 06:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.041 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:10.301 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.301 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:10.301 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.302 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:10.561 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.561 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:10.561 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.561 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:10.821 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:10.821 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:10.821 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.821 06:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:10.821 06:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:10.821 06:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:10.821 06:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:11.081 06:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:11.341 06:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:12.283 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:12.283 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:12.283 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.283 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:12.544 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:12.544 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:12.544 06:40:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.544 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:12.544 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.544 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:12.544 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.544 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:12.805 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.805 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:12.805 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.805 06:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.067 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:13.327 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.327 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:13.587 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:13.587 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:13.587 06:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:13.846 06:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:14.786 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:14.786 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:14.786 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.786 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:15.046 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.046 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:15.046 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.046 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:15.307 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.307 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:15.307 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.307 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:15.568 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.568 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:15.568 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:15.568 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.568 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.568 06:40:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:15.568 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.568 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:15.829 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.829 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:15.829 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.829 06:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:16.090 06:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.090 06:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:16.090 06:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:16.090 06:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:16.351 06:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:17.294 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:17.294 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:17.294 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.294 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:17.554 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:17.554 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:17.554 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.554 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:17.814 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.814 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:17.814 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.814 06:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:17.814 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.814 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:17.814 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.814 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:18.074 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.074 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:18.074 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.074 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:18.353 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.353 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:18.353 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.353 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:18.613 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.613 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:18.613 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:18.613 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:18.872 06:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
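Each port_status check traced above follows the same three-step pattern: query the host's view of the I/O paths over the bdevperf RPC socket, pick out one listener's entry by trsvcid with jq, and compare a single attribute (current/connected/accessible) against the expected value; set_ANA_state drives the target side by flipping the ANA state of both listeners. A minimal sketch of helpers matching this trace (reconstructed from the @59/@60 and @64 lines above, not the verbatim multipath_status.sh source):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Sketch reconstructed from the xtrace above; attr is one of
    # current/connected/accessible as reported by bdev_nvme_get_io_paths.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

    # Flip the ANA state of both listeners of cnode1 (first arg -> port 4420,
    # second arg -> port 4421), as in the @59/@60 lines above.
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The sleep 1 after each set_ANA_state presumably gives the host time to observe the ANA change (via AER and the ANA log page) before check_status asserts on the new path states.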
00:30:19.811 06:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:19.811 06:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:19.811 06:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.812 06:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:20.072 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.072 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:20.072 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.072 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.333 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:20.593 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.593 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:20.593 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.593 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:20.853 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.853 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:20.853 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.853 06:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:21.112 06:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.112 06:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:21.112 06:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:21.112 06:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:21.372 06:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:22.313 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:22.313 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:22.314 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.314 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:22.574 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:22.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:22.835 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.835 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:22.835 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.835 06:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:22.835 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:22.835 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:22.835 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.835 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:23.096 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.096 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:23.096 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:23.096 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2967321 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2967321 ']' 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2967321 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:23.357 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2967321 00:30:23.643 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:30:23.643 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:30:23.643 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2967321' 00:30:23.643 killing process with pid 2967321 00:30:23.643 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2967321 00:30:23.643 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2967321 00:30:23.643 { 00:30:23.643 "results": [ 00:30:23.643 { 00:30:23.643 "job": "Nvme0n1", 
00:30:23.643 "core_mask": "0x4", 00:30:23.643 "workload": "verify", 00:30:23.643 "status": "terminated", 00:30:23.643 "verify_range": { 00:30:23.643 "start": 0, 00:30:23.643 "length": 16384 00:30:23.643 }, 00:30:23.643 "queue_depth": 128, 00:30:23.643 "io_size": 4096, 00:30:23.643 "runtime": 26.956974, 00:30:23.643 "iops": 12391.41307180843, 00:30:23.643 "mibps": 48.40395731175168, 00:30:23.643 "io_failed": 0, 00:30:23.643 "io_timeout": 0, 00:30:23.643 "avg_latency_us": 10310.661517984641, 00:30:23.643 "min_latency_us": 788.48, 00:30:23.643 "max_latency_us": 3089803.946666667 00:30:23.643 } 00:30:23.643 ], 00:30:23.643 "core_count": 1 00:30:23.643 } 00:30:23.643 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2967321 00:30:23.643 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:23.643 [2024-11-20 06:40:14.821255] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:30:23.643 [2024-11-20 06:40:14.821334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967321 ] 00:30:23.643 [2024-11-20 06:40:14.914798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.643 [2024-11-20 06:40:14.966355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.643 Running I/O for 90 seconds... 00:30:23.643 11445.00 IOPS, 44.71 MiB/s [2024-11-20T05:40:43.922Z] 11749.50 IOPS, 45.90 MiB/s [2024-11-20T05:40:43.922Z] 11853.00 IOPS, 46.30 MiB/s [2024-11-20T05:40:43.922Z] 12162.75 IOPS, 47.51 MiB/s [2024-11-20T05:40:43.922Z] 12358.20 IOPS, 48.27 MiB/s [2024-11-20T05:40:43.922Z] 12439.83 IOPS, 48.59 MiB/s [2024-11-20T05:40:43.922Z] 12503.43 IOPS, 48.84 MiB/s [2024-11-20T05:40:43.922Z] 12593.75 IOPS, 49.19 MiB/s [2024-11-20T05:40:43.922Z] 12632.44 IOPS, 49.35 MiB/s [2024-11-20T05:40:43.922Z] 12700.40 IOPS, 49.61 MiB/s [2024-11-20T05:40:43.922Z] 12742.64 IOPS, 49.78 MiB/s [2024-11-20T05:40:43.922Z] [2024-11-20 06:40:28.707910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.707946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.707964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.707970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.707981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.707986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.707997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:23.643 [2024-11-20 06:40:28.708172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:23.643 [2024-11-20 06:40:28.708182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.643 [2024-11-20 06:40:28.708188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 
nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.644 [2024-11-20 06:40:28.708746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.644 [2024-11-20 06:40:28.708762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.644 [2024-11-20 06:40:28.708777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.644 [2024-11-20 06:40:28.708793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.644 [2024-11-20 06:40:28.708808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.644 [2024-11-20 06:40:28.708824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:23.644 [2024-11-20 06:40:28.708834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.644 [2024-11-20 06:40:28.708839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
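The flood of nvme_qpair notices above and below is the expected effect of the earlier inaccessible phase (timestamps 06:40:28 match the set_ANA_state inaccessible inaccessible step): each queued READ/WRITE completes with the path-related status ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02), i.e. Status Code Type 0x3 (Path Related Status) and Status Code 0x2 (Asymmetric Access Inaccessible), and the host multipath layer is then expected to retry the command on a usable path. A quick post-hoc tally from the transcript cat'ed above:

    # Count ANA-inaccessible completions in the saved bdevperf transcript.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt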
00:30:23.645 [2024-11-20 06:40:28.708849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.708855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.708870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.708885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.708902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.708918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.708933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.708948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.645 [2024-11-20 06:40:28.708964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.645 [2024-11-20 06:40:28.708980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.708990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.645 [2024-11-20 06:40:28.708996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.645 [2024-11-20 06:40:28.709011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.645 [2024-11-20 06:40:28.709027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.645 [2024-11-20 06:40:28.709679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:23.645 [2024-11-20 06:40:28.709690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
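The len:8 in every command print lines up with the job configuration reported at shutdown: io_size is 4096 bytes, so with 512-byte LBAs (an assumption about the namespace format, not stated in the log) each verify command spans eight blocks:

    # io_size (4096 B, from the bdevperf results above) divided by an assumed
    # 512 B logical block size gives the len:8 seen in each command print.
    echo $((4096 / 512))    # -> 8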
00:30:23.646 [2024-11-20 06:40:28.709728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.709992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.709998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.710013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.710029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.710045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.710060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.710077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.710093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.646 [2024-11-20 06:40:28.710109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:23.646 [2024-11-20 06:40:28.710119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:30:23.647 [2024-11-20 06:40:28.710203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.710985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.710991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.711002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.711007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.711017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.711023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.711033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.647 [2024-11-20 06:40:28.711039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:23.647 [2024-11-20 06:40:28.711049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.648 [2024-11-20 06:40:28.711055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.648 [2024-11-20 06:40:28.711070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.648 [2024-11-20 06:40:28.711085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:23.648 [2024-11-20 06:40:28.711150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:23.648 [2024-11-20 06:40:28.711483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.648 [2024-11-20 06:40:28.711489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 00:30:23.649 [2024-11-20 06:40:28.711625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.711663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.711673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.649 [2024-11-20 06:40:28.722493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.649 [2024-11-20 06:40:28.722746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:23.649 [2024-11-20 06:40:28.722760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.722767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.722780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.722788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.722803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.722810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.722824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.650 [2024-11-20 06:40:28.722830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.722845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.650 [2024-11-20 06:40:28.722852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.722866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.650 [2024-11-20 06:40:28.722873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.722888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.650 [2024-11-20 06:40:28.722896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.650 [2024-11-20 06:40:28.725096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:23.650 [2024-11-20 06:40:28.725272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.650 [2024-11-20 06:40:28.725526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:23.650 [2024-11-20 06:40:28.725540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:30:23.651 [2024-11-20 06:40:28.725896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.725978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.725985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.726000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.726007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.726020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.726027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.726041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.726048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.726062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.726069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.726082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.726089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:23.651 [2024-11-20 06:40:28.726105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.651 [2024-11-20 06:40:28.726112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:23.652 [2024-11-20 06:40:28.726294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.652 [2024-11-20 06:40:28.726301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:23.652 [2024-11-20 06:40:28.726314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:23.652 [2024-11-20 06:40:28.726321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
12771.25 IOPS, 49.89 MiB/s [2024-11-20T05:40:43.931Z]
00:30:23.652 [2024-11-20 06:40:28.727303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.652 [2024-11-20 06:40:28.727311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
[... identical WRITE/READ command + ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs repeat for every outstanding I/O on qid:1 (cid 0-126, lba 46392-47408), timestamps 2024-11-20 06:40:28.726314 through 06:40:28.739399 ...]
00:30:23.659 [2024-11-20 06:40:28.739399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.659 [2024-11-20 06:40:28.739406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.739419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.659 [2024-11-20 06:40:28.739426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.739441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.659 [2024-11-20 06:40:28.739448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:30:23.659 [2024-11-20 06:40:28.740154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.659 [2024-11-20 06:40:28.740299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:23.659 [2024-11-20 06:40:28.740316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:23.660 [2024-11-20 06:40:28.740772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.740988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.740994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.741009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.741016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.741030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.741036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.741050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.741057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.741070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.741079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:23.660 [2024-11-20 06:40:28.741092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.660 [2024-11-20 06:40:28.741099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.741302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.741312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:23.661 
[2024-11-20 06:40:28.742276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.661 [2024-11-20 06:40:28.742477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.661 [2024-11-20 06:40:28.742505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.661 [2024-11-20 06:40:28.742538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.661 [2024-11-20 06:40:28.742566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.661 [2024-11-20 06:40:28.742595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:23.661 [2024-11-20 06:40:28.742614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.661 [2024-11-20 06:40:28.742624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742856] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.742989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.742998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 
06:40:28.743140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.662 [2024-11-20 06:40:28.743420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.662 [2024-11-20 06:40:28.743429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.663 [2024-11-20 06:40:28.743458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.663 [2024-11-20 06:40:28.743486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.663 [2024-11-20 06:40:28.743515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.663 [2024-11-20 06:40:28.743545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743704] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.663 [2024-11-20 06:40:28.743911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.663 [2024-11-20 06:40:28.743939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 06:40:28.743958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.663 [2024-11-20 06:40:28.743968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:23.663 [2024-11-20 
06:40:28.743987 .. 06:40:28.752905] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion notice pairs elided (roughly 200 pairs in this burst). Every READ and WRITE on sqid:1 nsid:1 (lba 46392 .. 47408, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the NVMe path-related ANA status, each with qid:1 cdw0:0 p:0 m:0 dnr:0 and sqhd cycling through 0x0000 .. 0x007f; the burst continues past this point. 00:30:23.663 .. 00:30:23.671
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.752919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.752926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.752939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.752945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.752959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.752967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.752981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.752988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
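(Note the two SGL shapes in the command dumps: the WRITEs carry "SGL DATA BLOCK OFFSET 0x0 len:0x1000" — an offset-based SGL data block, i.e. 4 KiB of in-capsule payload, consistent with len:8 512-byte blocks — while the READs carry "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0", a transport SGL data block whose payload is returned as TCP transport data rather than inside the capsule.)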
00:30:23.671 [2024-11-20 06:40:28.753112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:23.671 [2024-11-20 06:40:28.753307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.671 [2024-11-20 06:40:28.753314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
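(To size this notice storm when triaging a saved copy of this console output — the file name below is illustrative, not something this job produces — two one-liners suffice:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log                     # total I/Os completed as path-inaccessible
  grep -o 'lba:[0-9]*' console.log | sort -t: -k2 -n | uniq -c | sort -rn | head   # LBAs retried most often

Every completion in this stretch carries the same (03/02) status, so the first count tracks exactly the retry churn seen here.)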
00:30:23.672 [2024-11-20 06:40:28.753703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.672 [2024-11-20 06:40:28.753848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:23.672 [2024-11-20 06:40:28.753861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.672 [2024-11-20 06:40:28.753868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.753880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.673 [2024-11-20 06:40:28.753887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.753900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.673 [2024-11-20 06:40:28.753907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.753919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.673 [2024-11-20 06:40:28.753926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.753939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.673 [2024-11-20 06:40:28.753945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:23.673 [2024-11-20 06:40:28.754854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.673 [2024-11-20 06:40:28.754892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:23.673 [2024-11-20 06:40:28.754906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.754913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.754925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.754932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.754944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.754951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.754963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.754970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.754983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.754989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:23.674 [2024-11-20 06:40:28.755412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:23.674 
[2024-11-20 06:40:28.755431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.674 [2024-11-20 06:40:28.755438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.759638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.759644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760394] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.675 [2024-11-20 06:40:28.760534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.675 [2024-11-20 06:40:28.760554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:23.675 [2024-11-20 06:40:28.760567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.675 [2024-11-20 06:40:28.760574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:23.676 [2024-11-20 06:40:28.760587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.676 [2024-11-20 
06:40:28.760593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:30:23.676 [2024-11-20 06:40:28.760606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.676 [2024-11-20 06:40:28.760613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:30:23.676 [2024-11-20 06:40:28.760626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.676 [2024-11-20 06:40:28.760632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
[... hundreds of near-identical entries: the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every outstanding READ (SGL TRANSPORT DATA BLOCK) and WRITE (SGL DATA BLOCK OFFSET) on qid:1, lba 46392-47408 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:30:23.683 [2024-11-20 06:40:28.766193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:23.683 [2024-11-20 06:40:28.766200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:23.683 [2024-11-20 06:40:28.766212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:23.683 [2024-11-20 06:40:28.766218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:23.683 [2024-11-20 06:40:28.766230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.683 [2024-11-20 06:40:28.766387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:23.683 [2024-11-20 06:40:28.766400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:23.684 [2024-11-20 06:40:28.766781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.766794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.766802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.767321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.767331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.767345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.767351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.767364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.767370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.767382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.767389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.767401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.767407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.767419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.684 [2024-11-20 06:40:28.767426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:23.684 [2024-11-20 06:40:28.767438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 
nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.685 [2024-11-20 06:40:28.767613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:30:23.685 [2024-11-20 06:40:28.767854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:23.685 [2024-11-20 06:40:28.767947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.685 [2024-11-20 06:40:28.767954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.767967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.767973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.767985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.767992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.686 [2024-11-20 06:40:28.768319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:23.686 [2024-11-20 06:40:28.768411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:23.686 [2024-11-20 06:40:28.768480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.686 [2024-11-20 06:40:28.768487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.768499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.768505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.768518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.768524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.768536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.768542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.768554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.768561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.768573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.687 [2024-11-20 06:40:28.768579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.768591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.687 [2024-11-20 06:40:28.768598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.768610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.687 [2024-11-20 06:40:28.768616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.687 [2024-11-20 06:40:28.769184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.687 [2024-11-20 06:40:28.769204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:23.687 
[2024-11-20 06:40:28.769518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.687 [2024-11-20 06:40:28.769581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:23.687 [2024-11-20 06:40:28.769593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-11-20 06:40:28.769599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:23.688 [2024-11-20 06:40:28.769612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-11-20 06:40:28.769619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:23.688 [2024-11-20 06:40:28.769631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-11-20 06:40:28.769638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:23.688 [2024-11-20 06:40:28.769650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-11-20 06:40:28.769656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:23.688 [2024-11-20 06:40:28.769668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-11-20 06:40:28.769675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:23.688 [2024-11-20 06:40:28.769687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.688 [2024-11-20 06:40:28.769693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:23.688 [2024-11-20 06:40:28.769705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:23.688 [2024-11-20 06:40:28.769713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
[... further 06:40:28 nvme_qpair.c command/completion pairs omitted: WRITEs (lba 47048-47408 and 46728-46824) and READs (lba 46392-46704) on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:30:23.691 11788.85 IOPS, 46.05 MiB/s [2024-11-20T05:40:43.970Z] 10946.79 IOPS, 42.76 MiB/s [2024-11-20T05:40:43.970Z] 10217.00 IOPS, 39.91 MiB/s [2024-11-20T05:40:43.970Z] 10348.12 IOPS, 40.42 MiB/s [2024-11-20T05:40:43.970Z] 10501.12 IOPS, 41.02 MiB/s [2024-11-20T05:40:43.970Z] 10885.67 IOPS, 42.52 MiB/s [2024-11-20T05:40:43.970Z] 11264.21 IOPS, 44.00 MiB/s [2024-11-20T05:40:43.970Z] 11549.75 IOPS, 45.12 MiB/s [2024-11-20T05:40:43.970Z] 11610.57 IOPS, 45.35 MiB/s [2024-11-20T05:40:43.970Z] 11663.05 IOPS, 45.56 MiB/s [2024-11-20T05:40:43.970Z] 11884.13 IOPS, 46.42 MiB/s [2024-11-20T05:40:43.970Z] 12146.92 IOPS, 47.45 MiB/s [2024-11-20T05:40:43.970Z]
[... second burst of nvme_qpair.c command/completion pairs at 06:40:41 omitted: WRITEs (lba 76216-76432) and READs (lba 75856-76200) on qid:1, again completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:30:23.692 12327.64 IOPS, 48.15 MiB/s [2024-11-20T05:40:43.971Z] 12366.46 IOPS, 48.31 MiB/s [2024-11-20T05:40:43.971Z]
00:30:23.692 Received shutdown signal, test time was about 26.957586 seconds
00:30:23.692 Latency(us)
00:30:23.692 [2024-11-20T05:40:43.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:23.692 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:23.692 Verification LBA range: start 0x0 length 0x4000
00:30:23.692 Nvme0n1 : 26.96 12391.41 48.40 0.00 0.00 10310.66 788.48 3089803.95
00:30:23.692 [2024-11-20T05:40:43.971Z] ===================================================================================================================
00:30:23.692 [2024-11-20T05:40:43.971Z] Total : 12391.41 48.40 0.00 0.00 10310.66 788.48 3089803.95
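Aggregating the NOTICE bursts above makes the failure pattern easier to see than scrolling through them; against a saved copy of the capture (this run writes to try.txt, which the script deletes during the teardown below), something like the following rough sketch works:

    # Rough aggregation of the qpair NOTICE bursts (any saved copy of the log works)
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt          # lines that hit the ANA-inaccessible completion path
    grep -oE '(READ|WRITE) sqid:[0-9]+' try.txt | sort | uniq -c      # commands split by opcode and submission queue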
00:30:23.692 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:23.953 06:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:23.953 rmmod nvme_tcp
00:30:23.953 rmmod nvme_fabrics
00:30:23.953 rmmod nvme_keyring
00:30:23.953 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:23.953 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:30:23.953 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:30:23.953 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2966819 ']'
00:30:23.953 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2966819
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2966819 ']'
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2966819
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2966819
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2966819'
00:30:23.954 killing process with pid 2966819
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2966819
00:30:23.954 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2966819
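Condensed, the teardown traced above amounts to the following sequence (a sketch reconstructed from this trace; the workspace path, subsystem nqn, and pid 2966819 are specific to this run):

    # Teardown sketch, reconstructed from the multipath_status.sh trace above
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TGT_PID=2966819                                   # this run's nvmf target app (reactor_0)
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f "$SPDK/test/nvmf/host/try.txt"              # per-test scratch capture
    sync
    modprobe -v -r nvme-tcp                           # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$TGT_PID"
    wait "$TGT_PID"                                   # valid here because the test shell spawned the target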
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:24.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:26.126 06:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:26.126
00:30:26.126 real 0m41.456s
00:30:26.126 user 1m46.773s
00:30:26.126 sys 0m11.865s
00:30:26.126 06:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:26.126 06:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:26.126 ************************************
00:30:26.126 END TEST nvmf_host_multipath_status
00:30:26.126 ************************************
00:30:26.126 06:40:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:30:26.126 06:40:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:30:26.126 06:40:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:26.126 06:40:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:26.387 ************************************
00:30:26.387 START TEST nvmf_discovery_remove_ifc
00:30:26.387 ************************************
00:30:26.387 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:30:26.387 * Looking for test storage...
00:30:26.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:26.387 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:30:26.387 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:30:26.387 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:30:26.387 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... per-step cmp_versions trace omitted: 1.15 and 2 are split on IFS=.-: into ver1=(1 15) and ver2=(2), the leading fields pass the decimal check, and (( ver1[v] < ver2[v] )) settles the comparison ...]
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
[... the matching @1704 LCOV_OPTS= assignment and the @1705 export/assignment of LCOV=lcov repeat the same option block verbatim; duplicates omitted ...]
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
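The lt/cmp_versions check condensed above is plain shell; a standalone approximation follows (simplified -- the real scripts/common.sh also validates each field through its decimal helper before comparing):

    # Approximation of scripts/common.sh's lt()/cmp_versions: split versions on
    # '.', '-' and ':' and compare numeric fields left to right (so 1.15 < 2)
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc lcov_* option spelling"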
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 trace omitted: each step re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already-expanded PATH, exports it, and echoes the result ...]
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:26.388 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.389 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.649 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.649 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.649 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.649 06:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:34.787 06:40:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:34.787 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.787 06:40:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:34.787 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:34.787 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:34.787 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:34.787 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.788 06:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.788 
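Everything nvmf_tcp_init does above reduces to a small two-endpoint topology: the first E810 port is moved into its own network namespace to act as the target (10.0.0.2) while the second port stays in the root namespace as the initiator (10.0.0.1), so the NVMe/TCP traffic crosses the physical link rather than loopback. A condensed sketch of those commands, with the interface names as discovered above (the initial address flushes omitted):

NS=cvl_0_0_ns_spdk; IF_TGT=cvl_0_0; IF_INI=cvl_0_1
ip netns add "$NS"                                 # namespace for the target side
ip link set "$IF_TGT" netns "$NS"                  # move the target port into it
ip addr add 10.0.0.1/24 dev "$IF_INI"              # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"   # target address
ip link set "$IF_INI" up
ip netns exec "$NS" ip link set "$IF_TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
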
06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:30:34.788 00:30:34.788 --- 10.0.0.2 ping statistics --- 00:30:34.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.788 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:30:34.788 00:30:34.788 --- 10.0.0.1 ping statistics --- 00:30:34.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.788 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2977307 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2977307 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2977307 ']' 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:34.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:34.788 06:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.788 [2024-11-20 06:40:54.247314] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:30:34.788 [2024-11-20 06:40:54.247381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.788 [2024-11-20 06:40:54.345736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.788 [2024-11-20 06:40:54.396267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.788 [2024-11-20 06:40:54.396316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.788 [2024-11-20 06:40:54.396324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.788 [2024-11-20 06:40:54.396332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.788 [2024-11-20 06:40:54.396338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.788 [2024-11-20 06:40:54.397107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.788 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:34.788 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:30:34.788 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.788 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.788 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.050 [2024-11-20 06:40:55.116397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.050 [2024-11-20 06:40:55.124654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:35.050 null0 00:30:35.050 [2024-11-20 06:40:55.156614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2977516 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2977516 /tmp/host.sock 00:30:35.050 06:40:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2977516 ']' 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:35.050 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:35.050 06:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.050 [2024-11-20 06:40:55.233217] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:30:35.050 [2024-11-20 06:40:55.233284] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977516 ] 00:30:35.050 [2024-11-20 06:40:55.326267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.310 [2024-11-20 06:40:55.379367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.882 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:35.882 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:30:35.882 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:35.882 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:35.882 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.882 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.883 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.883 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:35.883 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.883 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.883 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.883 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:35.883 06:40:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.883 06:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:37.267 [2024-11-20 06:40:57.216132] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:37.267 [2024-11-20 06:40:57.216153] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:37.267 [2024-11-20 06:40:57.216170] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:37.267 [2024-11-20 06:40:57.343575] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:37.267 [2024-11-20 06:40:57.524695] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:37.267 [2024-11-20 06:40:57.525832] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x758410:1 started. 00:30:37.267 [2024-11-20 06:40:57.527397] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:37.267 [2024-11-20 06:40:57.527444] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:37.267 [2024-11-20 06:40:57.527463] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:37.267 [2024-11-20 06:40:57.527477] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:37.267 [2024-11-20 06:40:57.527498] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:37.267 [2024-11-20 06:40:57.534630] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x758410 was disconnected and freed. delete nvme_qpair. 
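Discovery has now attached the subsystem and exposed it as bdev nvme0n1; the get_bdev_list helper being traced around here flattens the RPC reply into one sorted, space-separated line so the test can compare it as a plain string. A stand-alone equivalent of that pipeline, calling SPDK's scripts/rpc.py directly instead of the test's rpc_cmd wrapper:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # prints "nvme0n1" at this point
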
00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:37.267 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:37.527 06:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:38.911 06:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:39.853 06:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:40.794 06:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:30:41.734 06:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:43.119 [2024-11-20 06:41:02.967900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:43.119 [2024-11-20 06:41:02.967937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.119 [2024-11-20 06:41:02.967946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.119 [2024-11-20 06:41:02.967954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.119 [2024-11-20 06:41:02.967959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.119 [2024-11-20 06:41:02.967965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.119 [2024-11-20 06:41:02.967970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.119 [2024-11-20 06:41:02.967976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.119 [2024-11-20 06:41:02.967985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.119 [2024-11-20 06:41:02.967991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.119 [2024-11-20 06:41:02.967996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.119 [2024-11-20 06:41:02.968002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x734c00 is same with the state(6) to be set 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:43.119 [2024-11-20 06:41:02.977922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x734c00 (9): Bad file descriptor 00:30:43.119 [2024-11-20 06:41:02.987958] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:43.119 [2024-11-20 06:41:02.987969] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
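The repeated get_bdev_list / sleep 1 rounds above are the test's wait_for_bdev poll: after discovery_remove_ifc.sh@75/@76 deleted 10.0.0.2 and downed cvl_0_0, it keeps listing bdevs until the list drains to the expected value, while bdev_nvme begins tearing down the timed-out qpair. A sketch of that loop under the same assumptions (rpc.py standing in for rpc_cmd):

wait_for_bdev() {
    local expected=$1 current
    while :; do
        current=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ $current == "$expected" ]] && break   # '' once the path is gone
        sleep 1
    done
}
wait_for_bdev ''   # block until nvme0n1 disappears from the list
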
00:30:43.119 [2024-11-20 06:41:02.987972] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:43.119 [2024-11-20 06:41:02.987976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:43.119 [2024-11-20 06:41:02.987993] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:43.119 06:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.120 06:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:43.120 06:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:44.064 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:44.064 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.064 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:44.064 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.064 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:44.064 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:44.064 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:44.064 [2024-11-20 06:41:04.029523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:44.064 [2024-11-20 06:41:04.029596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x734c00 with addr=10.0.0.2, port=4420 00:30:44.064 [2024-11-20 06:41:04.029627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x734c00 is same with the state(6) to be set 00:30:44.064 [2024-11-20 06:41:04.029681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x734c00 (9): Bad file descriptor 00:30:44.065 [2024-11-20 06:41:04.029793] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:44.065 [2024-11-20 06:41:04.029862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:44.065 [2024-11-20 06:41:04.029885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:44.065 [2024-11-20 06:41:04.029909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:44.065 [2024-11-20 06:41:04.029930] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:44.065 [2024-11-20 06:41:04.029947] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:44.065 [2024-11-20 06:41:04.029964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:44.065 [2024-11-20 06:41:04.029991] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
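The one-second retry cadence and quick give-up in these reconnect messages follow from the flags bdev_nvme_start_discovery was given earlier in the trace: reconnect attempts every second, fast I/O failure after one second, and the controller declared lost after two. For reference, that invocation in rpc.py form:

rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach
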
00:30:44.065 [2024-11-20 06:41:04.030009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:44.065 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.065 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:44.065 06:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:45.006 [2024-11-20 06:41:05.032419] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:45.006 [2024-11-20 06:41:05.032435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:45.006 [2024-11-20 06:41:05.032444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:45.006 [2024-11-20 06:41:05.032449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:45.006 [2024-11-20 06:41:05.032455] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:45.006 [2024-11-20 06:41:05.032460] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:45.006 [2024-11-20 06:41:05.032463] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:45.006 [2024-11-20 06:41:05.032466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:45.006 [2024-11-20 06:41:05.032484] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:45.006 [2024-11-20 06:41:05.032500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.006 [2024-11-20 06:41:05.032507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.006 [2024-11-20 06:41:05.032513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.006 [2024-11-20 06:41:05.032519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.006 [2024-11-20 06:41:05.032525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.006 [2024-11-20 06:41:05.032530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.006 [2024-11-20 06:41:05.032535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.006 [2024-11-20 06:41:05.032541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.006 [2024-11-20 06:41:05.032546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.006 [2024-11-20 06:41:05.032555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.006 [2024-11-20 06:41:05.032560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:30:45.006 [2024-11-20 06:41:05.032709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x724340 (9): Bad file descriptor 00:30:45.006 [2024-11-20 06:41:05.033720] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:45.006 [2024-11-20 06:41:05.033727] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:45.006 06:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:46.390 06:41:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:46.390 06:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:46.960 [2024-11-20 06:41:07.090124] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:46.960 [2024-11-20 06:41:07.090138] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:46.960 [2024-11-20 06:41:07.090148] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:46.960 [2024-11-20 06:41:07.219528] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:47.221 06:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:47.221 [2024-11-20 06:41:07.401601] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:47.221 [2024-11-20 06:41:07.402297] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x729260:1 started. 
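Recovery half of the test: once the address is restored inside the namespace, the still-running discovery service reconnects on its own and surfaces the subsystem as a brand-new controller, which is why the poll now waits for nvme1n1 rather than nvme0n1. The two commands that undo the fault, exactly as run above:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# ...followed by the same wait_for_bdev poll, this time for "nvme1n1"
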
00:30:47.221 [2024-11-20 06:41:07.403192] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:47.221 [2024-11-20 06:41:07.403219] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:47.221 [2024-11-20 06:41:07.403233] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:47.221 [2024-11-20 06:41:07.403244] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:47.221 [2024-11-20 06:41:07.403250] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:47.221 [2024-11-20 06:41:07.408881] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x729260 was disconnected and freed. delete nvme_qpair. 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:48.162 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2977516 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2977516 ']' 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2977516 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2977516 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2977516' 00:30:48.423 killing process with pid 2977516 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2977516 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2977516 00:30:48.423 06:41:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:48.423 rmmod nvme_tcp 00:30:48.423 rmmod nvme_fabrics 00:30:48.423 rmmod nvme_keyring 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2977307 ']' 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2977307 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2977307 ']' 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2977307 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:48.423 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2977307 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2977307' 00:30:48.684 killing process with pid 2977307 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2977307 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2977307 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.684 06:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.315 06:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.315 00:30:51.315 real 0m24.502s 00:30:51.315 user 0m29.641s 00:30:51.315 sys 0m7.220s 00:30:51.315 06:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:51.315 06:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.315 ************************************ 00:30:51.315 END TEST nvmf_discovery_remove_ifc 00:30:51.315 ************************************ 00:30:51.315 06:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:51.315 06:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:51.315 06:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:51.315 06:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.315 ************************************ 00:30:51.315 START TEST nvmf_identify_kernel_target 00:30:51.315 ************************************ 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:51.315 * Looking for test storage... 
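[Editor's note] The teardown traced above (killprocess on both daemons, module unload, firewall and namespace cleanup) follows a small reusable pattern. A condensed sketch, assuming the PID is passed in by the caller; the helper name matches the killprocess seen in the trace, but the body is a simplification of what autotest_common.sh actually does (it additionally checks the process's comm name before signalling).

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if it already exited
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it so sockets/ports are truly freed
}

# Firewall cleanup works because every rule the test installs carries an
# "SPDK_NVMF" comment (visible in the iptables -m comment invocation later in
# this log), so restoring the saved ruleset minus those lines removes exactly
# the test's rules and nothing else:
iptables-save | grep -v SPDK_NVMF | iptables-restore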
00:30:51.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.315 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:51.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.315 --rc genhtml_branch_coverage=1 00:30:51.316 --rc genhtml_function_coverage=1 00:30:51.316 --rc genhtml_legend=1 00:30:51.316 --rc geninfo_all_blocks=1 00:30:51.316 --rc geninfo_unexecuted_blocks=1 00:30:51.316 00:30:51.316 ' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:51.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.316 --rc genhtml_branch_coverage=1 00:30:51.316 --rc genhtml_function_coverage=1 00:30:51.316 --rc genhtml_legend=1 00:30:51.316 --rc geninfo_all_blocks=1 00:30:51.316 --rc geninfo_unexecuted_blocks=1 00:30:51.316 00:30:51.316 ' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:51.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.316 --rc genhtml_branch_coverage=1 00:30:51.316 --rc genhtml_function_coverage=1 00:30:51.316 --rc genhtml_legend=1 00:30:51.316 --rc geninfo_all_blocks=1 00:30:51.316 --rc geninfo_unexecuted_blocks=1 00:30:51.316 00:30:51.316 ' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:51.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.316 --rc genhtml_branch_coverage=1 00:30:51.316 --rc genhtml_function_coverage=1 00:30:51.316 --rc genhtml_legend=1 00:30:51.316 --rc geninfo_all_blocks=1 00:30:51.316 --rc geninfo_unexecuted_blocks=1 00:30:51.316 00:30:51.316 ' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:51.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.316 06:41:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.503 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.504 06:41:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:59.504 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:59.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:59.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:59.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:30:59.504 00:30:59.504 --- 10.0.0.2 ping statistics --- 00:30:59.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.504 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:59.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:30:59.504 00:30:59.504 --- 10.0.0.1 ping statistics --- 00:30:59.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.504 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:30:59.504 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.505 06:41:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:59.505 06:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:02.052 Waiting for block devices as requested 00:31:02.052 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:02.052 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:02.314 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:02.314 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:02.314 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:02.576 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:02.576 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:02.576 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:02.837 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:02.837 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:03.097 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:03.097 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:03.097 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:03.357 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:03.357 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:03.358 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:03.617 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
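[Editor's note] configure_kernel_target, which begins above and whose individual mkdir/echo steps appear below, is the stock Linux nvmet configfs recipe. A condensed sketch using this run's values (nqn.2016-06.io.spdk:testnqn, /dev/nvme0n1, 10.0.0.1:4420, with the kernel target listening on the host side while the SPDK initiator lives in the cvl_0_0_ns_spdk namespace). The attribute file names are the mainline nvmet ABI as I recall it rather than quoted from the trace, so treat them as an assumption; the run also writes a model string ("SPDK-nqn...") which is omitted here since that attribute only exists on newer kernels.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$ns" "$port"              # order matters: subsystem before namespace

echo 1            > "$subsys/attr_allow_any_host"   # skip host-NQN allow-listing
echo /dev/nvme0n1 > "$ns/device_path"               # back namespace 1 with the local disk
echo 1            > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"        # listen address for this run
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"        # expose the subsystem on the port

Once the symlink is in place, discovery from the initiator side ("nvme discover -t tcp -a 10.0.0.1 -s 4420") should return the two records shown in the discovery log output below: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.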
00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:03.879 06:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:03.879 No valid GPT data, bailing 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:03.879 00:31:03.879 Discovery Log Number of Records 2, Generation counter 2 00:31:03.879 =====Discovery Log Entry 0====== 00:31:03.879 trtype: tcp 00:31:03.879 adrfam: ipv4 00:31:03.879 subtype: current discovery subsystem 00:31:03.879 treq: not specified, sq flow control disable supported 00:31:03.879 portid: 1 00:31:03.879 trsvcid: 4420 00:31:03.879 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:03.879 traddr: 10.0.0.1 00:31:03.879 eflags: none 00:31:03.879 sectype: none 00:31:03.879 =====Discovery Log Entry 1====== 00:31:03.879 trtype: tcp 00:31:03.879 adrfam: ipv4 00:31:03.879 subtype: nvme subsystem 00:31:03.879 treq: not specified, sq flow control disable 
supported 00:31:03.879 portid: 1 00:31:03.879 trsvcid: 4420 00:31:03.879 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:03.879 traddr: 10.0.0.1 00:31:03.879 eflags: none 00:31:03.879 sectype: none 00:31:03.879 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:03.879 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:04.141 ===================================================== 00:31:04.141 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:04.141 ===================================================== 00:31:04.141 Controller Capabilities/Features 00:31:04.141 ================================ 00:31:04.141 Vendor ID: 0000 00:31:04.141 Subsystem Vendor ID: 0000 00:31:04.141 Serial Number: 385270619d0790e75239 00:31:04.141 Model Number: Linux 00:31:04.141 Firmware Version: 6.8.9-20 00:31:04.141 Recommended Arb Burst: 0 00:31:04.141 IEEE OUI Identifier: 00 00 00 00:31:04.141 Multi-path I/O 00:31:04.141 May have multiple subsystem ports: No 00:31:04.141 May have multiple controllers: No 00:31:04.141 Associated with SR-IOV VF: No 00:31:04.141 Max Data Transfer Size: Unlimited 00:31:04.141 Max Number of Namespaces: 0 00:31:04.141 Max Number of I/O Queues: 1024 00:31:04.141 NVMe Specification Version (VS): 1.3 00:31:04.141 NVMe Specification Version (Identify): 1.3 00:31:04.141 Maximum Queue Entries: 1024 00:31:04.141 Contiguous Queues Required: No 00:31:04.141 Arbitration Mechanisms Supported 00:31:04.141 Weighted Round Robin: Not Supported 00:31:04.141 Vendor Specific: Not Supported 00:31:04.141 Reset Timeout: 7500 ms 00:31:04.141 Doorbell Stride: 4 bytes 00:31:04.141 NVM Subsystem Reset: Not Supported 00:31:04.141 Command Sets Supported 00:31:04.141 NVM Command Set: Supported 00:31:04.141 Boot Partition: Not Supported 00:31:04.141 Memory Page Size Minimum: 4096 bytes 00:31:04.141 Memory Page Size Maximum: 4096 bytes 00:31:04.141 Persistent Memory Region: Not Supported 00:31:04.141 Optional Asynchronous Events Supported 00:31:04.141 Namespace Attribute Notices: Not Supported 00:31:04.141 Firmware Activation Notices: Not Supported 00:31:04.141 ANA Change Notices: Not Supported 00:31:04.141 PLE Aggregate Log Change Notices: Not Supported 00:31:04.141 LBA Status Info Alert Notices: Not Supported 00:31:04.141 EGE Aggregate Log Change Notices: Not Supported 00:31:04.141 Normal NVM Subsystem Shutdown event: Not Supported 00:31:04.141 Zone Descriptor Change Notices: Not Supported 00:31:04.141 Discovery Log Change Notices: Supported 00:31:04.141 Controller Attributes 00:31:04.141 128-bit Host Identifier: Not Supported 00:31:04.141 Non-Operational Permissive Mode: Not Supported 00:31:04.141 NVM Sets: Not Supported 00:31:04.141 Read Recovery Levels: Not Supported 00:31:04.141 Endurance Groups: Not Supported 00:31:04.141 Predictable Latency Mode: Not Supported 00:31:04.141 Traffic Based Keep ALive: Not Supported 00:31:04.141 Namespace Granularity: Not Supported 00:31:04.141 SQ Associations: Not Supported 00:31:04.141 UUID List: Not Supported 00:31:04.141 Multi-Domain Subsystem: Not Supported 00:31:04.141 Fixed Capacity Management: Not Supported 00:31:04.141 Variable Capacity Management: Not Supported 00:31:04.141 Delete Endurance Group: Not Supported 00:31:04.141 Delete NVM Set: Not Supported 00:31:04.141 Extended LBA Formats Supported: Not Supported 00:31:04.141 Flexible Data Placement 
Supported: Not Supported 00:31:04.141 00:31:04.141 Controller Memory Buffer Support 00:31:04.141 ================================ 00:31:04.141 Supported: No 00:31:04.141 00:31:04.141 Persistent Memory Region Support 00:31:04.141 ================================ 00:31:04.141 Supported: No 00:31:04.141 00:31:04.141 Admin Command Set Attributes 00:31:04.141 ============================ 00:31:04.141 Security Send/Receive: Not Supported 00:31:04.141 Format NVM: Not Supported 00:31:04.141 Firmware Activate/Download: Not Supported 00:31:04.141 Namespace Management: Not Supported 00:31:04.141 Device Self-Test: Not Supported 00:31:04.141 Directives: Not Supported 00:31:04.141 NVMe-MI: Not Supported 00:31:04.141 Virtualization Management: Not Supported 00:31:04.141 Doorbell Buffer Config: Not Supported 00:31:04.141 Get LBA Status Capability: Not Supported 00:31:04.141 Command & Feature Lockdown Capability: Not Supported 00:31:04.141 Abort Command Limit: 1 00:31:04.142 Async Event Request Limit: 1 00:31:04.142 Number of Firmware Slots: N/A 00:31:04.142 Firmware Slot 1 Read-Only: N/A 00:31:04.142 Firmware Activation Without Reset: N/A 00:31:04.142 Multiple Update Detection Support: N/A 00:31:04.142 Firmware Update Granularity: No Information Provided 00:31:04.142 Per-Namespace SMART Log: No 00:31:04.142 Asymmetric Namespace Access Log Page: Not Supported 00:31:04.142 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:04.142 Command Effects Log Page: Not Supported 00:31:04.142 Get Log Page Extended Data: Supported 00:31:04.142 Telemetry Log Pages: Not Supported 00:31:04.142 Persistent Event Log Pages: Not Supported 00:31:04.142 Supported Log Pages Log Page: May Support 00:31:04.142 Commands Supported & Effects Log Page: Not Supported 00:31:04.142 Feature Identifiers & Effects Log Page:May Support 00:31:04.142 NVMe-MI Commands & Effects Log Page: May Support 00:31:04.142 Data Area 4 for Telemetry Log: Not Supported 00:31:04.142 Error Log Page Entries Supported: 1 00:31:04.142 Keep Alive: Not Supported 00:31:04.142 00:31:04.142 NVM Command Set Attributes 00:31:04.142 ========================== 00:31:04.142 Submission Queue Entry Size 00:31:04.142 Max: 1 00:31:04.142 Min: 1 00:31:04.142 Completion Queue Entry Size 00:31:04.142 Max: 1 00:31:04.142 Min: 1 00:31:04.142 Number of Namespaces: 0 00:31:04.142 Compare Command: Not Supported 00:31:04.142 Write Uncorrectable Command: Not Supported 00:31:04.142 Dataset Management Command: Not Supported 00:31:04.142 Write Zeroes Command: Not Supported 00:31:04.142 Set Features Save Field: Not Supported 00:31:04.142 Reservations: Not Supported 00:31:04.142 Timestamp: Not Supported 00:31:04.142 Copy: Not Supported 00:31:04.142 Volatile Write Cache: Not Present 00:31:04.142 Atomic Write Unit (Normal): 1 00:31:04.142 Atomic Write Unit (PFail): 1 00:31:04.142 Atomic Compare & Write Unit: 1 00:31:04.142 Fused Compare & Write: Not Supported 00:31:04.142 Scatter-Gather List 00:31:04.142 SGL Command Set: Supported 00:31:04.142 SGL Keyed: Not Supported 00:31:04.142 SGL Bit Bucket Descriptor: Not Supported 00:31:04.142 SGL Metadata Pointer: Not Supported 00:31:04.142 Oversized SGL: Not Supported 00:31:04.142 SGL Metadata Address: Not Supported 00:31:04.142 SGL Offset: Supported 00:31:04.142 Transport SGL Data Block: Not Supported 00:31:04.142 Replay Protected Memory Block: Not Supported 00:31:04.142 00:31:04.142 Firmware Slot Information 00:31:04.142 ========================= 00:31:04.142 Active slot: 0 00:31:04.142 00:31:04.142 00:31:04.142 Error Log 00:31:04.142 
=========
00:31:04.142 
00:31:04.142 Active Namespaces
00:31:04.142 =================
00:31:04.142 Discovery Log Page
00:31:04.142 ==================
00:31:04.142 Generation Counter: 2
00:31:04.142 Number of Records: 2
00:31:04.142 Record Format: 0
00:31:04.142 
00:31:04.142 Discovery Log Entry 0
00:31:04.142 ----------------------
00:31:04.142 Transport Type: 3 (TCP)
00:31:04.142 Address Family: 1 (IPv4)
00:31:04.142 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:04.142 Entry Flags:
00:31:04.142 Duplicate Returned Information: 0
00:31:04.142 Explicit Persistent Connection Support for Discovery: 0
00:31:04.142 Transport Requirements:
00:31:04.142 Secure Channel: Not Specified
00:31:04.142 Port ID: 1 (0x0001)
00:31:04.142 Controller ID: 65535 (0xffff)
00:31:04.142 Admin Max SQ Size: 32
00:31:04.142 Transport Service Identifier: 4420
00:31:04.142 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:04.142 Transport Address: 10.0.0.1
00:31:04.142 Discovery Log Entry 1
00:31:04.142 ----------------------
00:31:04.142 Transport Type: 3 (TCP)
00:31:04.142 Address Family: 1 (IPv4)
00:31:04.142 Subsystem Type: 2 (NVM Subsystem)
00:31:04.142 Entry Flags:
00:31:04.142 Duplicate Returned Information: 0
00:31:04.142 Explicit Persistent Connection Support for Discovery: 0
00:31:04.142 Transport Requirements:
00:31:04.142 Secure Channel: Not Specified
00:31:04.142 Port ID: 1 (0x0001)
00:31:04.142 Controller ID: 65535 (0xffff)
00:31:04.142 Admin Max SQ Size: 32
00:31:04.142 Transport Service Identifier: 4420
00:31:04.142 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:31:04.142 Transport Address: 10.0.0.1
00:31:04.142 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:31:04.142 get_feature(0x01) failed
00:31:04.142 get_feature(0x02) failed
00:31:04.142 get_feature(0x04) failed
00:31:04.142 =====================================================
00:31:04.142 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:31:04.142 =====================================================
00:31:04.142 Controller Capabilities/Features
00:31:04.142 ================================
00:31:04.142 Vendor ID: 0000
00:31:04.142 Subsystem Vendor ID: 0000
00:31:04.142 Serial Number: cf9c1ae19d5d8afb6851
00:31:04.142 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:31:04.142 Firmware Version: 6.8.9-20
00:31:04.142 Recommended Arb Burst: 6
00:31:04.142 IEEE OUI Identifier: 00 00 00
00:31:04.142 Multi-path I/O
00:31:04.142 May have multiple subsystem ports: Yes
00:31:04.142 May have multiple controllers: Yes
00:31:04.142 Associated with SR-IOV VF: No
00:31:04.142 Max Data Transfer Size: Unlimited
00:31:04.142 Max Number of Namespaces: 1024
00:31:04.142 Max Number of I/O Queues: 128
00:31:04.142 NVMe Specification Version (VS): 1.3
00:31:04.142 NVMe Specification Version (Identify): 1.3
00:31:04.142 Maximum Queue Entries: 1024
00:31:04.142 Contiguous Queues Required: No
00:31:04.142 Arbitration Mechanisms Supported
00:31:04.142 Weighted Round Robin: Not Supported
00:31:04.142 Vendor Specific: Not Supported
00:31:04.142 Reset Timeout: 7500 ms
00:31:04.142 Doorbell Stride: 4 bytes
00:31:04.142 NVM Subsystem Reset: Not Supported
00:31:04.142 Command Sets Supported
00:31:04.142 NVM Command Set: Supported
00:31:04.142 Boot Partition: Not Supported
00:31:04.142 Memory Page Size Minimum: 4096 bytes
00:31:04.142 Memory Page Size Maximum: 4096 bytes
00:31:04.142 Persistent Memory Region: Not Supported
00:31:04.142 Optional Asynchronous Events Supported
00:31:04.142 Namespace Attribute Notices: Supported
00:31:04.142 Firmware Activation Notices: Not Supported
00:31:04.142 ANA Change Notices: Supported
00:31:04.142 PLE Aggregate Log Change Notices: Not Supported
00:31:04.142 LBA Status Info Alert Notices: Not Supported
00:31:04.142 EGE Aggregate Log Change Notices: Not Supported
00:31:04.142 Normal NVM Subsystem Shutdown event: Not Supported
00:31:04.142 Zone Descriptor Change Notices: Not Supported
00:31:04.142 Discovery Log Change Notices: Not Supported
00:31:04.142 Controller Attributes
00:31:04.142 128-bit Host Identifier: Supported
00:31:04.142 Non-Operational Permissive Mode: Not Supported
00:31:04.142 NVM Sets: Not Supported
00:31:04.142 Read Recovery Levels: Not Supported
00:31:04.142 Endurance Groups: Not Supported
00:31:04.142 Predictable Latency Mode: Not Supported
00:31:04.142 Traffic Based Keep ALive: Supported
00:31:04.142 Namespace Granularity: Not Supported
00:31:04.142 SQ Associations: Not Supported
00:31:04.142 UUID List: Not Supported
00:31:04.142 Multi-Domain Subsystem: Not Supported
00:31:04.142 Fixed Capacity Management: Not Supported
00:31:04.142 Variable Capacity Management: Not Supported
00:31:04.142 Delete Endurance Group: Not Supported
00:31:04.142 Delete NVM Set: Not Supported
00:31:04.142 Extended LBA Formats Supported: Not Supported
00:31:04.142 Flexible Data Placement Supported: Not Supported
00:31:04.142 
00:31:04.142 Controller Memory Buffer Support
00:31:04.142 ================================
00:31:04.142 Supported: No
00:31:04.142 
00:31:04.142 Persistent Memory Region Support
00:31:04.142 ================================
00:31:04.142 Supported: No
00:31:04.142 
00:31:04.142 Admin Command Set Attributes
00:31:04.142 ============================
00:31:04.142 Security Send/Receive: Not Supported
00:31:04.142 Format NVM: Not Supported
00:31:04.142 Firmware Activate/Download: Not Supported
00:31:04.142 Namespace Management: Not Supported
00:31:04.142 Device Self-Test: Not Supported
00:31:04.142 Directives: Not Supported
00:31:04.142 NVMe-MI: Not Supported
00:31:04.142 Virtualization Management: Not Supported
00:31:04.142 Doorbell Buffer Config: Not Supported
00:31:04.142 Get LBA Status Capability: Not Supported
00:31:04.142 Command & Feature Lockdown Capability: Not Supported
00:31:04.143 Abort Command Limit: 4
00:31:04.143 Async Event Request Limit: 4
00:31:04.143 Number of Firmware Slots: N/A
00:31:04.143 Firmware Slot 1 Read-Only: N/A
00:31:04.143 Firmware Activation Without Reset: N/A
00:31:04.143 Multiple Update Detection Support: N/A
00:31:04.143 Firmware Update Granularity: No Information Provided
00:31:04.143 Per-Namespace SMART Log: Yes
00:31:04.143 Asymmetric Namespace Access Log Page: Supported
00:31:04.143 ANA Transition Time : 10 sec
00:31:04.143 
00:31:04.143 Asymmetric Namespace Access Capabilities
00:31:04.143 ANA Optimized State : Supported
00:31:04.143 ANA Non-Optimized State : Supported
00:31:04.143 ANA Inaccessible State : Supported
00:31:04.143 ANA Persistent Loss State : Supported
00:31:04.143 ANA Change State : Supported
00:31:04.143 ANAGRPID is not changed : No
00:31:04.143 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:31:04.143 
00:31:04.143 ANA Group Identifier Maximum : 128
00:31:04.143 Number of ANA Group Identifiers : 128
00:31:04.143 Max Number of Allowed Namespaces : 1024
00:31:04.143 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:31:04.143 Command Effects Log Page: Supported
00:31:04.143 Get Log Page Extended Data: Supported
00:31:04.143 Telemetry Log Pages: Not Supported
00:31:04.143 Persistent Event Log Pages: Not Supported
00:31:04.143 Supported Log Pages Log Page: May Support
00:31:04.143 Commands Supported & Effects Log Page: Not Supported
00:31:04.143 Feature Identifiers & Effects Log Page:May Support
00:31:04.143 NVMe-MI Commands & Effects Log Page: May Support
00:31:04.143 Data Area 4 for Telemetry Log: Not Supported
00:31:04.143 Error Log Page Entries Supported: 128
00:31:04.143 Keep Alive: Supported
00:31:04.143 Keep Alive Granularity: 1000 ms
00:31:04.143 
00:31:04.143 NVM Command Set Attributes
00:31:04.143 ==========================
00:31:04.143 Submission Queue Entry Size
00:31:04.143 Max: 64
00:31:04.143 Min: 64
00:31:04.143 Completion Queue Entry Size
00:31:04.143 Max: 16
00:31:04.143 Min: 16
00:31:04.143 Number of Namespaces: 1024
00:31:04.143 Compare Command: Not Supported
00:31:04.143 Write Uncorrectable Command: Not Supported
00:31:04.143 Dataset Management Command: Supported
00:31:04.143 Write Zeroes Command: Supported
00:31:04.143 Set Features Save Field: Not Supported
00:31:04.143 Reservations: Not Supported
00:31:04.143 Timestamp: Not Supported
00:31:04.143 Copy: Not Supported
00:31:04.143 Volatile Write Cache: Present
00:31:04.143 Atomic Write Unit (Normal): 1
00:31:04.143 Atomic Write Unit (PFail): 1
00:31:04.143 Atomic Compare & Write Unit: 1
00:31:04.143 Fused Compare & Write: Not Supported
00:31:04.143 Scatter-Gather List
00:31:04.143 SGL Command Set: Supported
00:31:04.143 SGL Keyed: Not Supported
00:31:04.143 SGL Bit Bucket Descriptor: Not Supported
00:31:04.143 SGL Metadata Pointer: Not Supported
00:31:04.143 Oversized SGL: Not Supported
00:31:04.143 SGL Metadata Address: Not Supported
00:31:04.143 SGL Offset: Supported
00:31:04.143 Transport SGL Data Block: Not Supported
00:31:04.143 Replay Protected Memory Block: Not Supported
00:31:04.143 
00:31:04.143 Firmware Slot Information
00:31:04.143 =========================
00:31:04.143 Active slot: 0
00:31:04.143 
00:31:04.143 Asymmetric Namespace Access
00:31:04.143 ===========================
00:31:04.143 Change Count : 0
00:31:04.143 Number of ANA Group Descriptors : 1
00:31:04.143 ANA Group Descriptor : 0
00:31:04.143 ANA Group ID : 1
00:31:04.143 Number of NSID Values : 1
00:31:04.143 Change Count : 0
00:31:04.143 ANA State : 1
00:31:04.143 Namespace Identifier : 1
00:31:04.143 
00:31:04.143 Commands Supported and Effects
00:31:04.143 ==============================
00:31:04.143 Admin Commands
00:31:04.143 --------------
00:31:04.143 Get Log Page (02h): Supported
00:31:04.143 Identify (06h): Supported
00:31:04.143 Abort (08h): Supported
00:31:04.143 Set Features (09h): Supported
00:31:04.143 Get Features (0Ah): Supported
00:31:04.143 Asynchronous Event Request (0Ch): Supported
00:31:04.143 Keep Alive (18h): Supported
00:31:04.143 I/O Commands
00:31:04.143 ------------
00:31:04.143 Flush (00h): Supported
00:31:04.143 Write (01h): Supported LBA-Change
00:31:04.143 Read (02h): Supported
00:31:04.143 Write Zeroes (08h): Supported LBA-Change
00:31:04.143 Dataset Management (09h): Supported
00:31:04.143 
00:31:04.143 Error Log
00:31:04.143 =========
00:31:04.143 Entry: 0
00:31:04.143 Error Count: 0x3
00:31:04.143 Submission Queue Id: 0x0
00:31:04.143 Command Id: 0x5
00:31:04.143 Phase Bit: 0
00:31:04.143 Status Code: 0x2
00:31:04.143 Status Code Type: 0x0
00:31:04.143 Do Not Retry: 1
00:31:04.143 Error Location: 0x28
00:31:04.143 LBA: 0x0
00:31:04.143 Namespace: 0x0
00:31:04.143 Vendor Log Page: 0x0
00:31:04.143 -----------
00:31:04.143 Entry: 1
00:31:04.143 Error Count: 0x2
00:31:04.143 Submission Queue Id: 0x0
00:31:04.143 Command Id: 0x5
00:31:04.143 Phase Bit: 0
00:31:04.143 Status Code: 0x2
00:31:04.143 Status Code Type: 0x0
00:31:04.143 Do Not Retry: 1
00:31:04.143 Error Location: 0x28
00:31:04.143 LBA: 0x0
00:31:04.143 Namespace: 0x0
00:31:04.143 Vendor Log Page: 0x0
00:31:04.143 -----------
00:31:04.143 Entry: 2
00:31:04.143 Error Count: 0x1
00:31:04.143 Submission Queue Id: 0x0
00:31:04.143 Command Id: 0x4
00:31:04.143 Phase Bit: 0
00:31:04.143 Status Code: 0x2
00:31:04.143 Status Code Type: 0x0
00:31:04.143 Do Not Retry: 1
00:31:04.143 Error Location: 0x28
00:31:04.143 LBA: 0x0
00:31:04.143 Namespace: 0x0
00:31:04.143 Vendor Log Page: 0x0
00:31:04.143 
00:31:04.143 Number of Queues
00:31:04.143 ================
00:31:04.143 Number of I/O Submission Queues: 128
00:31:04.143 Number of I/O Completion Queues: 128
00:31:04.143 
00:31:04.143 ZNS Specific Controller Data
00:31:04.143 ============================
00:31:04.143 Zone Append Size Limit: 0
00:31:04.143 
00:31:04.143 
00:31:04.143 Active Namespaces
00:31:04.143 =================
00:31:04.143 get_feature(0x05) failed
00:31:04.143 Namespace ID:1
00:31:04.143 Command Set Identifier: NVM (00h)
00:31:04.143 Deallocate: Supported
00:31:04.143 Deallocated/Unwritten Error: Not Supported
00:31:04.143 Deallocated Read Value: Unknown
00:31:04.143 Deallocate in Write Zeroes: Not Supported
00:31:04.143 Deallocated Guard Field: 0xFFFF
00:31:04.143 Flush: Supported
00:31:04.143 Reservation: Not Supported
00:31:04.143 Namespace Sharing Capabilities: Multiple Controllers
00:31:04.143 Size (in LBAs): 3750748848 (1788GiB)
00:31:04.143 Capacity (in LBAs): 3750748848 (1788GiB)
00:31:04.143 Utilization (in LBAs): 3750748848 (1788GiB)
00:31:04.143 UUID: 6877f05e-d240-412f-8d8a-6b830bbb68b7
00:31:04.143 Thin Provisioning: Not Supported
00:31:04.143 Per-NS Atomic Units: Yes
00:31:04.143 Atomic Write Unit (Normal): 8
00:31:04.143 Atomic Write Unit (PFail): 8
00:31:04.143 Preferred Write Granularity: 8
00:31:04.143 Atomic Compare & Write Unit: 8
00:31:04.143 Atomic Boundary Size (Normal): 0
00:31:04.143 Atomic Boundary Size (PFail): 0
00:31:04.143 Atomic Boundary Offset: 0
00:31:04.143 NGUID/EUI64 Never Reused: No
00:31:04.143 ANA group ID: 1
00:31:04.143 Namespace Write Protected: No
00:31:04.143 Number of LBA Formats: 1
00:31:04.143 Current LBA Format: LBA Format #00
00:31:04.143 LBA Format #00: Data Size: 512 Metadata Size: 0
00:31:04.143 
00:31:04.143 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:04.143 rmmod nvme_tcp
00:31:04.143 rmmod nvme_fabrics
00:31:04.143 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.143 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:04.143 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:04.143 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:04.143 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.143 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.144 06:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:06.687 06:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:10.061 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:10.061 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:10.322 00:31:10.322 real 0m19.558s 00:31:10.322 user 0m5.332s 00:31:10.322 sys 0m11.270s 00:31:10.322 06:41:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:10.322 06:41:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.322 ************************************ 00:31:10.322 END TEST nvmf_identify_kernel_target 00:31:10.322 ************************************ 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.582 ************************************ 00:31:10.582 START TEST nvmf_auth_host 00:31:10.582 ************************************ 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:10.582 * Looking for test storage... 
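For reference, the clean_kernel_target teardown traced above boils down to the configfs sequence below; a condensed sketch, with the NQN and port number taken from this run (the trace shows a bare 'echo 0', so the namespace enable flag as its target is an assumption):

nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed target of the bare 'echo 0' in the trace
rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink the subsystem from port 1
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                          # unload the kernel target modules

setup.sh then rebinds the ioatdma and nvme devices to vfio-pci, as the rebind lines above show, so the next SPDK test can claim them.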
00:31:10.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.582 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:10.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.583 --rc genhtml_branch_coverage=1 00:31:10.583 --rc genhtml_function_coverage=1 00:31:10.583 --rc genhtml_legend=1 00:31:10.583 --rc geninfo_all_blocks=1 00:31:10.583 --rc geninfo_unexecuted_blocks=1 00:31:10.583 00:31:10.583 ' 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:10.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.583 --rc genhtml_branch_coverage=1 00:31:10.583 --rc genhtml_function_coverage=1 00:31:10.583 --rc genhtml_legend=1 00:31:10.583 --rc geninfo_all_blocks=1 00:31:10.583 --rc geninfo_unexecuted_blocks=1 00:31:10.583 00:31:10.583 ' 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:10.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.583 --rc genhtml_branch_coverage=1 00:31:10.583 --rc genhtml_function_coverage=1 00:31:10.583 --rc genhtml_legend=1 00:31:10.583 --rc geninfo_all_blocks=1 00:31:10.583 --rc geninfo_unexecuted_blocks=1 00:31:10.583 00:31:10.583 ' 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:10.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.583 --rc genhtml_branch_coverage=1 00:31:10.583 --rc genhtml_function_coverage=1 00:31:10.583 --rc genhtml_legend=1 00:31:10.583 --rc geninfo_all_blocks=1 00:31:10.583 --rc geninfo_unexecuted_blocks=1 00:31:10.583 00:31:10.583 ' 00:31:10.583 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.844 06:41:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:10.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.844 06:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:18.981 06:41:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:18.981 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:18.981 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.981 
06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:18.981 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:18.981 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:18.981 06:41:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:18.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:18.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:31:18.981 00:31:18.981 --- 10.0.0.2 ping statistics --- 00:31:18.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.981 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:31:18.981 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:18.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:18.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:31:18.981 00:31:18.981 --- 10.0.0.1 ping statistics --- 00:31:18.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.982 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2992124 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2992124 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2992124 ']' 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
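The nvmftestinit sequence traced above splits the two e810 ports across a network namespace and starts the target inside it; a condensed sketch, with the interface, namespace, and address values taken from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # cross-namespace reachability check
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &

waitforlisten then polls until the target answers on /var/tmp/spdk.sock before the auth test proceeds.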
00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:18.982 06:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e05527555a8b72989bfc1ed98f4b8466 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.epq 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e05527555a8b72989bfc1ed98f4b8466 0 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e05527555a8b72989bfc1ed98f4b8466 0 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e05527555a8b72989bfc1ed98f4b8466 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.epq 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.epq 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.epq 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.243 06:41:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4244efce1085d74ba340dc481bc7407f6713c5550c9b216530236f2055f4d30a 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.351 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4244efce1085d74ba340dc481bc7407f6713c5550c9b216530236f2055f4d30a 3 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4244efce1085d74ba340dc481bc7407f6713c5550c9b216530236f2055f4d30a 3 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4244efce1085d74ba340dc481bc7407f6713c5550c9b216530236f2055f4d30a 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.351 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.351 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.351 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7819b4ce61cafcc2fbcb68fcde1bdd1e502c178b262470bd 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fgE 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7819b4ce61cafcc2fbcb68fcde1bdd1e502c178b262470bd 0 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7819b4ce61cafcc2fbcb68fcde1bdd1e502c178b262470bd 0 
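Each gen_dhchap_key call traced here follows the same pattern: read N random bytes with xxd, wrap them in the DHHC-1 interchange format, and store the result in a mode-0600 temp file. A minimal sketch of that flow; the python body only approximates what format_key's inline python does, and the little-endian CRC32 trailer is an assumption rather than something visible in this log:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars of key material for a null-digest key
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")  # assumed CRC32 trailer, little-endian
print("DHHC-1:00:%s:" % base64.b64encode(raw + crc).decode())
PY
chmod 0600 "$file"

The two-digit field after DHHC-1: tracks the digest argument seen in the trace (0 for null here; the sha256, sha384 and sha512 keys generated above pass 1, 2 and 3).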
00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7819b4ce61cafcc2fbcb68fcde1bdd1e502c178b262470bd 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:19.243 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fgE 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fgE 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fgE 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=810252b6f4cc186998774439021980b8918d051e4a33aaf0 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4IS 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 810252b6f4cc186998774439021980b8918d051e4a33aaf0 2 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 810252b6f4cc186998774439021980b8918d051e4a33aaf0 2 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=810252b6f4cc186998774439021980b8918d051e4a33aaf0 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4IS 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4IS 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4IS 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.504 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.505 06:41:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7dd2c6e1e20d321fe261c84099b01284 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FG4 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7dd2c6e1e20d321fe261c84099b01284 1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7dd2c6e1e20d321fe261c84099b01284 1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7dd2c6e1e20d321fe261c84099b01284 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FG4 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FG4 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FG4 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fecd1a9aeea93ebf0365d3d436d56347 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p51 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fecd1a9aeea93ebf0365d3d436d56347 1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fecd1a9aeea93ebf0365d3d436d56347 1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=fecd1a9aeea93ebf0365d3d436d56347 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p51 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p51 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.p51 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ffaf959dca8b56300fa7c47fbabd1fa445b527b54c22e21a 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uwW 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ffaf959dca8b56300fa7c47fbabd1fa445b527b54c22e21a 2 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ffaf959dca8b56300fa7c47fbabd1fa445b527b54c22e21a 2 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ffaf959dca8b56300fa7c47fbabd1fa445b527b54c22e21a 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:19.505 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uwW 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uwW 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.uwW 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:19.765 06:41:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a08d78845f3b7c18ae7a50edd1ce918c 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:19.765 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lHp 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a08d78845f3b7c18ae7a50edd1ce918c 0 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a08d78845f3b7c18ae7a50edd1ce918c 0 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a08d78845f3b7c18ae7a50edd1ce918c 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lHp 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lHp 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.lHp 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ea6ceb43a6c6ca1e77ec8a256265935885d2dcf7d15553e23fedb3e5813fdde 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DYd 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ea6ceb43a6c6ca1e77ec8a256265935885d2dcf7d15553e23fedb3e5813fdde 3 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ea6ceb43a6c6ca1e77ec8a256265935885d2dcf7d15553e23fedb3e5813fdde 3 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ea6ceb43a6c6ca1e77ec8a256265935885d2dcf7d15553e23fedb3e5813fdde 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DYd 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DYd 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DYd 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2992124 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2992124 ']' 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:19.766 06:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.epq 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.351 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.351 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fgE 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4IS ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.4IS 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FG4 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.p51 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p51 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uwW 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.lHp ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.lHp 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DYd 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:20.028 06:41:40 
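With the target app up (the waitforlisten on /var/tmp/spdk.sock above), host/auth.sh registers every generated file with SPDK's keyring: keyN for the local secrets, ckeyN for the controller secrets, with ckey4 skipped because ckeys[4] is empty. rpc_cmd in the trace resolves to scripts/rpc.py against that socket; condensed, the loop is:

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then    # ckeys[4] is empty, so no ckey4 entry
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done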
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:20.028 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:20.029 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:20.029 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:20.029 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:20.029 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:20.029 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:31:20.029 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:20.029 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:20.290 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:20.290 06:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:23.592 Waiting for block devices as requested 00:31:23.592 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:23.592 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:23.592 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:23.855 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:23.855 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:23.855 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:24.116 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:24.116 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:24.116 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:24.376 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:24.376 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:24.636 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:24.636 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:24.636 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:24.636 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:24.896 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:24.896 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:25.837 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:25.837 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:25.837 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:25.838 No valid GPT data, bailing 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:25.838 06:41:45 
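nvmet_auth_init then builds the kernel NVMe-oF counterpart through configfs: load nvmet, create subsystem nqn.2024-02.io.spdk:cnode0 with namespace 1 backed by /dev/nvme0n1 (the only usable block device found above; the GPT probe bailing out is what marks it free), and expose it on TCP 10.0.0.1:4420. xtrace does not record redirection targets, so the attribute file names below are assumptions based on the standard nvmet configfs layout; the echoed values match the trace that follows, and the nvme discover call afterwards confirms the port exports both the discovery subsystem and cnode0.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"    # assumed attribute name
echo 1 > "$subsys/attr_allow_any_host"                         # assumed attribute name
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"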
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:25.838 06:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:25.838 00:31:25.838 Discovery Log Number of Records 2, Generation counter 2 00:31:25.838 =====Discovery Log Entry 0====== 00:31:25.838 trtype: tcp 00:31:25.838 adrfam: ipv4 00:31:25.838 subtype: current discovery subsystem 00:31:25.838 treq: not specified, sq flow control disable supported 00:31:25.838 portid: 1 00:31:25.838 trsvcid: 4420 00:31:25.838 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:25.838 traddr: 10.0.0.1 00:31:25.838 eflags: none 00:31:25.838 sectype: none 00:31:25.838 =====Discovery Log Entry 1====== 00:31:25.838 trtype: tcp 00:31:25.838 adrfam: ipv4 00:31:25.838 subtype: nvme subsystem 00:31:25.838 treq: not specified, sq flow control disable supported 00:31:25.838 portid: 1 00:31:25.838 trsvcid: 4420 00:31:25.838 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:25.838 traddr: 10.0.0.1 00:31:25.838 eflags: none 00:31:25.838 sectype: none 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.838 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.154 nvme0n1 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
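nvmet_auth_set_key is the target-side half of each test iteration. host/auth.sh@36-38 earlier created hosts/nqn.2024-02.io.spdk:host0, wrote 0 to turn allow-any-host off, and linked host0 into the subsystem's allowed_hosts; the @48-51 echoes just above then install the digest, DH group, and DHHC-1 secrets for that host. The redirection targets are again not in the xtrace; the dhchap_* attribute names below are assumed from the kernel nvmet host configfs layout.

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"        # assumed attribute names
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "$key" > "$host/dhchap_key"                 # host's DHHC-1 secret
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"    # only for bidirectional auth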
00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.154 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.415 nvme0n1 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.415 06:41:46 
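connect_authenticate is the initiator-side half: pin the bdev_nvme layer to exactly the digest and DH group under test, attach with the matching keyring entries (10.0.0.1 is NVMF_INITIATOR_IP as resolved by get_main_ns_ip), verify a controller named nvme0 appeared, then tear it down. One iteration, as driven above:

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0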
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.415 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.676 nvme0n1 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:26.676 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:26.677 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.677 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.937 nvme0n1 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.937 06:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.937 nvme0n1 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.937 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.197 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.197 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.197 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.197 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.197 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.197 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.197 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.198 nvme0n1 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.198 06:41:47 
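keyid 4 is the only entry without a controller secret (ckeys[4] is empty, hence the [[ -z '' ]] check above), so the attach passes only --dhchap-key key4 and authentication is unidirectional. The ckey=(${ckeys[keyid]:+...}) line in the trace is the bash array idiom that makes the extra flag vanish:

# expands to zero words when ckeys[keyid] is empty or unset
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"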
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.198 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.459 nvme0n1 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.459 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.719 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:27.720 
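[Annotation] The nvmet_auth_set_key calls in this trace stage the DH-CHAP parameters on the kernel target side before each connection attempt: the digest name, the FFDHE group, the host key, and (when present) the controller key are each echoed at auth.sh@48-51. The log shows only the echo arguments, not their destinations; a minimal sketch of the idea, assuming the standard Linux nvmet configfs host attributes (paths are not shown in this log and should be verified):

  # Sketch only -- configfs layout assumed; key payloads elided with '...'.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'  > "$host/dhchap_hash"      # digest, as echoed at auth.sh@48
  echo ffdhe3072       > "$host/dhchap_dhgroup"   # DH group, as echoed at auth.sh@49
  echo 'DHHC-1:00:...' > "$host/dhchap_key"       # host key (auth.sh@50)
  echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"  # controller key, only when a ckey exists (auth.sh@51)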
06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.720 nvme0n1 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.720 06:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.979 06:41:48 
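[Annotation] connect_authenticate then drives the SPDK host side through the two RPCs that appear verbatim above: bdev_nvme_set_options restricts the negotiable digest and DH group, and bdev_nvme_attach_controller performs the authenticated connect. Condensed from the trace (rpc_cmd is the autotest wrapper; scripts/rpc.py accepts the same verbs, and the key names are assumed to have been registered in keyring setup earlier in the test, outside this excerpt):

  # Same sequence as host/auth.sh@60-61, written against rpc.py directly.
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1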
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:27.979 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.980 nvme0n1 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.980 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.240 06:41:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:28.240 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.241 nvme0n1 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.241 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:28.501 06:41:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:28.501 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.502 nvme0n1 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.502 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.763 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.763 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:28.763 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.764 06:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.033 nvme0n1 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:29.033 06:41:49 
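[Annotation] Each attach is verified and torn down the same way before the next iteration: bdev_nvme_get_controllers is piped through jq to extract the controller name, which must literal-match nvme0 (the escaped [[ nvme0 == \n\v\m\e\0 ]] pattern in the trace), and the controller is then detached so every digest/group/key combination starts from a clean state. A compact form of that check, again assuming rpc.py:

  name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')  # auth.sh@64
  [[ $name == nvme0 ]] || exit 1            # controller must exist and be named nvme0
  rpc.py bdev_nvme_detach_controller nvme0  # auth.sh@65: clean up for the next iteration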
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.033 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.294 nvme0n1 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
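[Annotation] The repetition visible in this trace comes from two nested loops in host/auth.sh: the outer loop (auth.sh@101) walks the configured DH groups and the inner loop (auth.sh@102) walks every key index, so each group is exercised with each key. Reconstructed shape, restricted to the values this excerpt actually shows (the full test covers more digests and groups):

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)      # groups seen in this excerpt
  for dhgroup in "${dhgroups[@]}"; do           # auth.sh@101
      for keyid in 0 1 2 3 4; do                # auth.sh@102 iterates "${!keys[@]}"
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side (auth.sh@103)
          connect_authenticate sha256 "$dhgroup" "$keyid"  # host side (auth.sh@104)
      done
  done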
00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.294 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.554 nvme0n1 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.554 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.555 06:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.814 nvme0n1 00:31:29.814 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.814 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.815 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.815 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.815 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.815 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.075 06:41:50 
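[Annotation] The secrets themselves use the NVMe in-band DHHC-1 representation, 'DHHC-1:<t>:<base64 secret+CRC>:', where the two-digit <t> field records the optional transformation hash applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) -- which is why key0 above begins DHHC-1:00 while key4 begins DHHC-1:03. nvme-cli can mint such keys; a hedged example (flag spellings per recent nvme-cli, worth verifying against your version):

  # Generate a 32-byte SHA-256-transformed DH-CHAP secret bound to the host NQN.
  nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0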
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.075 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.334 nvme0n1 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.334 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.335 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.904 nvme0n1 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 
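[Annotation] get_main_ns_ip, expanded before every attach (nvmf/common.sh@769-783), is a small transport-to-address lookup: it keys an associative array on the transport ('rdma' maps to NVMF_FIRST_TARGET_IP, 'tcp' to NVMF_INITIATOR_IP) and echoes the resolved address, 10.0.0.1 in this run. Its visible logic reduces to the sketch below; the transport variable name and the exported address are assumptions standing in for the test environment:

  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[tcp]}      # transport is tcp in this run (nvmf/common.sh@772-776)
  NVMF_INITIATOR_IP=10.0.0.1    # assumed to be exported by the test environment
  echo "${!ip}"                 # indirect expansion -> 10.0.0.1 (nvmf/common.sh@783)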
00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:30.904 06:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:30.904 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.904 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.165 nvme0n1 00:31:31.165 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.165 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.165 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.165 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.165 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.165 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.424 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.424 06:41:51 
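[Annotation] One subtlety worth noting: keyid 4 has no controller key (the trace shows ckey= and [[ -z '' ]]), so its attach omits --dhchap-ctrlr-key and authentication is unidirectional. The script handles this with the conditional array expansion at auth.sh@58, reproduced here with a tiny demonstration:

  declare -a ckeys; keyid=4; ckeys[4]=   # empty: key 4 ships without a controller key
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # auth.sh@58
  echo "${#ckey[@]}"  # 0 -> the attach command line gains no --dhchap-ctrlr-key argument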
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.424 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.425 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.685 nvme0n1 00:31:31.685 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.685 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.685 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.685 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.685 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.685 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:31.946 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.947 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:31.947 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.947 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.947 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.947 06:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.947 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.207 nvme0n1 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.207 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.468 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.728 nvme0n1 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.728 06:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.668 nvme0n1 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.668 06:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.240 nvme0n1 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:34.240 
06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.240 06:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.811 nvme0n1 00:31:34.811 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.811 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.811 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.811 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.811 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.071 
06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.071 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.642 nvme0n1 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.642 06:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.583 nvme0n1 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.583 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.584 nvme0n1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.584 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.844 nvme0n1 00:31:36.844 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.844 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.844 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.844 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.844 06:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:36.844 06:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:36.844 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.845 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.105 nvme0n1 00:31:37.105 06:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.105 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.106 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.366 nvme0n1 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.366 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.627 nvme0n1 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.627 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.887 nvme0n1 00:31:37.887 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.887 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.887 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.887 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.887 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.887 06:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.887 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.887 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.887 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.887 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.887 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.887 
06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.887 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.888 06:41:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.888 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.149 nvme0n1 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.149 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.409 nvme0n1 00:31:38.409 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.409 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.409 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.409 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.409 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.409 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.410 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.670 nvme0n1 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.670 
06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.670 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.930 nvme0n1 00:31:38.930 06:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.930 
06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.930 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.931 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.191 nvme0n1 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:39.191 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.192 06:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.192 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.452 nvme0n1 00:31:39.452 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.452 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.452 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.452 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.452 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.452 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.452 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.713 06:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.973 nvme0n1 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.973 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.974 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.237 nvme0n1 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.237 06:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.237 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.496 nvme0n1 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.496 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.497 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.497 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.497 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.497 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.497 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.497 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:40.497 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.757 06:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.018 nvme0n1 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.018 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.278 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.539 nvme0n1 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.539 06:42:01 
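The fragments above come from host/auth.sh's main test loop: for every digest, every DH group, and every key index it re-keys the kernel nvmet target (nvmet_auth_set_key) and re-runs the host-side connect (connect_authenticate). A minimal sketch of that loop shape, with the function names and argument order taken from the trace and the array contents reduced to placeholders (the real lists and secrets are defined earlier in the suite):

    # Shape of the host/auth.sh iteration seen in this trace; digests,
    # dhgroups, and the DHHC-1 secrets below are illustrative placeholders.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    keys=("DHHC-1:00:..." "DHHC-1:00:..." "DHHC-1:01:..." "DHHC-1:02:..." "DHHC-1:03:...")
    ckeys=("DHHC-1:03:..." "DHHC-1:02:..." "DHHC-1:01:..." "DHHC-1:00:..." "")  # key 4 has no ctrlr key

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify, detach
            done
        done
    done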
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.539 06:42:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.539 06:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.110 nvme0n1 00:31:42.110 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.110 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.110 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.110 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.110 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.111 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.111 
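The DHHC-1 strings carried on these lines are DH-HMAC-CHAP secrets in the representation shared by nvme-cli and the kernel: a version tag, a two-digit transform hint (00 = secret used as-is, 01/02/03 = transformed with SHA-256/384/512), a base64 payload, and a closing colon. The payload being the secret with a 4-byte CRC-32 appended is stated here from the nvme-cli format, so treat that layout as an assumption rather than something this log proves; a quick structural check:

    # Split a DHHC-1 secret into its fields and confirm the payload length.
    key='DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE:'
    IFS=: read -r tag hint b64 _ <<< "$key"
    echo "tag=$tag transform-hint=$hint"
    # 48 base64 chars decode to 36 bytes: a 32-byte secret plus 4 bytes of CRC-32.
    printf '%s' "$b64" | base64 -d | wc -c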
06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.682 nvme0n1 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.682 06:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.975 nvme0n1 00:31:42.975 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.975 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.975 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.975 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.976 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.976 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.294 06:42:03 
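Each passing round in this trace is the same four host-side RPCs: bdev_nvme_set_options narrows the negotiable digest and DH group to the pair under test, bdev_nvme_attach_controller dials the kernel nvmet target at 10.0.0.1:4420 with a host key and optionally a controller key, bdev_nvme_get_controllers confirms nvme0 exists (an authentication failure would have failed the attach), and bdev_nvme_detach_controller cleans up for the next round. Condensed with scripts/rpc.py, which rpc_cmd wraps; the key names refer to keyring entries registered earlier in the suite:

    rpc=scripts/rpc.py

    # Allow only the digest/dhgroup pair under test for the next attach.
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Attach with DH-HMAC-CHAP; this fails outright if authentication fails.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller came up, then tear it down for the next round.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0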
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.294 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.893 nvme0n1 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.893 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.894 06:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.894 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.465 nvme0n1 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.465 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.466 
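The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line that precedes every attach uses Bash's ':+' alternate-value expansion to build an optional flag pair: when ckeys[keyid] is empty, as it is for key 4 in this suite, the array expands to nothing and the attach runs without --dhchap-ctrlr-key at all. The pattern in isolation:

    # ${var:+word} yields word only when var is set and non-empty, so the
    # whole flag pair vanishes for entries that have no controller key.
    ckeys=("secret-a" "" "secret-c")
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "key$keyid args: --dhchap-key key$keyid ${ckey[*]}"
    done
    # key0 args: --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # key1 args: --dhchap-key key1
    # key2 args: --dhchap-key key2 --dhchap-ctrlr-key ckey2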
06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.466 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.726 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.726 06:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.297 nvme0n1 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.297 06:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.867 nvme0n1 00:31:45.867 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.867 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.867 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.867 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.867 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.867 06:42:06 
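The get_main_ns_ip fragments that repeat before every attach pick the dial address by transport: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, the entry for the active transport is selected, and the named variable is dereferenced, which is how 10.0.0.1 gets echoed on every round here. A reconstruction of that helper from the trace; the transport variable name is an assumption, since the trace only shows its expanded value, tcp:

    # Pick the env var matching the transport, then indirect-expand it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1             # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}             # ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                      # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                    # echo 10.0.0.1
    }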
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.127 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.128 06:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.128 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.699 nvme0n1 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.699 06:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:46.960 nvme0n1 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.960 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.220 nvme0n1 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:47.220 
06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.220 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.221 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.481 nvme0n1 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.481 
06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.481 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.741 nvme0n1 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.742 nvme0n1 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.742 06:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.742 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.004 nvme0n1 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.004 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.266 
06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.266 06:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.266 nvme0n1 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.266 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:48.527 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:48.528 06:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.528 nvme0n1 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.528 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.789 06:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.789 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.790 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.790 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.790 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.790 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.790 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.790 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.790 06:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.790 nvme0n1 00:31:48.790 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.790 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.790 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.790 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.790 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.790 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.055 
06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
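(The sha512/ffdhe3072 pass above cycles keyid 0 through 4 over one fixed sequence: program the target via nvmet_auth_set_key, restrict the host's allowed digest and DH group, attach with the matching DH-HMAC-CHAP key, confirm the controller came up, and detach. As a minimal sketch of one connect_authenticate iteration, assuming rpc_cmd forwards to SPDK's scripts/rpc.py exactly as this harness does, and that the keyN/ckeyN key names were registered earlier in the run:
  # One iteration of the digest x dhgroup x keyid sweep; values taken
  # from the keyid=3 pass logged above.
  digest=sha512 dhgroup=ffdhe3072 keyid=3
  # Limit the host to a single digest/DH-group so the negotiation is deterministic.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Attach with bidirectional auth: key$keyid proves the host, ckey$keyid
  # is demanded from the controller.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # Authentication succeeded iff the controller is now listed.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
The 10.0.0.1 address is not hard-coded: the get_main_ns_ip helper traced at nvmf/common.sh@769-783 above picks NVMF_INITIATOR_IP for tcp transports and NVMF_FIRST_TARGET_IP for rdma, then echoes the resolved address into the attach call.)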
00:31:49.055 nvme0n1 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.055 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:49.316 06:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.316 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.317 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.578 nvme0n1 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.578 06:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.578 06:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.578 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.839 nvme0n1 00:31:49.839 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.839 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.839 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.839 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.839 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.839 06:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.839 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.099 nvme0n1 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.099 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.359 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.619 nvme0n1 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.620 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.880 nvme0n1 00:31:50.880 06:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.881 06:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.881 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.452 nvme0n1 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:51.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.453 06:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.453 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.022 nvme0n1 00:31:52.022 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.022 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.022 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.022 06:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.022 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.023 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.282 nvme0n1 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.282 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.542 06:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.803 nvme0n1 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.803 06:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.803 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.063 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.324 nvme0n1 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA1NTI3NTU1YThiNzI5ODliZmMxZWQ5OGY0Yjg0NjbtbUfE: 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: ]] 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDI0NGVmY2UxMDg1ZDc0YmEzNDBkYzQ4MWJjNzQwN2Y2NzEzYzU1NTBjOWIyMTY1MzAyMzZmMjA1NWY0ZDMwYR0B/xo=: 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:53.324 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.325 06:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.264 nvme0n1 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.264 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.834 nvme0n1 00:31:54.834 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.834 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.834 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.834 06:42:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.834 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.834 06:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.834 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.835 06:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.835 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.412 nvme0n1 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmZhZjk1OWRjYThiNTYzMDBmYTdjNDdmYmFiZDFmYTQ0NWI1MjdiNTRjMjJlMjFhwLSm4Q==: 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA4ZDc4ODQ1ZjNiN2MxOGFlN2E1MGVkZDFjZTkxOGMdKsVx: 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:55.673 06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.673 
06:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.242 nvme0n1 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWVhNmNlYjQzYTZjNmNhMWU3N2VjOGEyNTYyNjU5MzU4ODVkMmRjZjdkMTU1NTNlMjNmZWRiM2U1ODEzZmRkZeyd+RY=: 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.243 06:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.182 nvme0n1 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.182 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.183 request: 00:31:57.183 { 00:31:57.183 "name": "nvme0", 00:31:57.183 "trtype": "tcp", 00:31:57.183 "traddr": "10.0.0.1", 00:31:57.183 "adrfam": "ipv4", 00:31:57.183 "trsvcid": "4420", 00:31:57.183 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.183 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.183 "prchk_reftag": false, 00:31:57.183 "prchk_guard": false, 00:31:57.183 "hdgst": false, 00:31:57.183 "ddgst": false, 00:31:57.183 "allow_unrecognized_csi": false, 00:31:57.183 "method": "bdev_nvme_attach_controller", 00:31:57.183 "req_id": 1 00:31:57.183 } 00:31:57.183 Got JSON-RPC error response 00:31:57.183 response: 00:31:57.183 { 00:31:57.183 "code": -5, 00:31:57.183 "message": "Input/output error" 00:31:57.183 } 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
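The request/response dump above is an expected failure, not a bug: host/auth.sh@112 wraps the attach in the suite's NOT helper, so the -5 (Input/output error) returned for a connection attempted without a DH-HMAC-CHAP key is exactly what lets the assertion pass. A minimal sketch of that inversion, reconstructed from the es bookkeeping visible in the trace (the real helper in common/autotest_common.sh also validates its argument via type -t and has extra handling for signal statuses above 128; treat this as a simplification, not the verbatim implementation):

    NOT() {
        local es=0
        "$@" || es=$?
        # the trace's (( es > 128 )) branch deals with signal exits; elided here
        (( !es == 0 ))   # succeed only when the wrapped command failed
    }

    # usage as in host/auth.sh@112: the attach must be rejected
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0

Because (( !es == 0 )) evaluates to true only when es is nonzero, the subsequent es=1 lines in the trace mark the negative test succeeding.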
00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.183 request: 00:31:57.183 { 00:31:57.183 "name": "nvme0", 00:31:57.183 "trtype": "tcp", 00:31:57.183 "traddr": "10.0.0.1", 00:31:57.183 "adrfam": "ipv4", 00:31:57.183 "trsvcid": "4420", 00:31:57.183 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.183 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.183 "prchk_reftag": false, 00:31:57.183 "prchk_guard": false, 00:31:57.183 "hdgst": false, 00:31:57.183 "ddgst": false, 00:31:57.183 "dhchap_key": "key2", 00:31:57.183 "allow_unrecognized_csi": false, 00:31:57.183 "method": "bdev_nvme_attach_controller", 00:31:57.183 "req_id": 1 00:31:57.183 } 00:31:57.183 Got JSON-RPC error response 00:31:57.183 response: 00:31:57.183 { 00:31:57.183 "code": -5, 00:31:57.183 "message": "Input/output error" 00:31:57.183 } 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
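The second negative case above exercises a key mismatch rather than a missing key: the kernel target was keyed at host/auth.sh@110 with keyid 1 (sha256/ffdhe2048), so offering key2 from the initiator is refused with the same -5 error. To reproduce the call outside the harness, note that rpc_cmd is essentially a wrapper around SPDK's scripts/rpc.py talking to the app's RPC socket (the default /var/tmp/spdk.sock is assumed here); the flags below mirror the dumped JSON-RPC request one-to-one:

    # hypothetical standalone equivalent of the NOT-wrapped rpc_cmd above;
    # expected to fail while the target only accepts key1 for this host
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2

The key1/ckey2 request that follows probes the complementary direction: the transport key matches, but the bidirectional controller key does not, and the attach is again rejected with -5.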
00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.183 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.444 request: 00:31:57.444 { 00:31:57.444 "name": "nvme0", 00:31:57.444 "trtype": "tcp", 00:31:57.444 "traddr": "10.0.0.1", 00:31:57.444 "adrfam": "ipv4", 00:31:57.444 "trsvcid": "4420", 00:31:57.444 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.444 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.444 "prchk_reftag": false, 00:31:57.444 "prchk_guard": false, 00:31:57.444 "hdgst": false, 00:31:57.444 "ddgst": false, 00:31:57.444 "dhchap_key": "key1", 00:31:57.444 "dhchap_ctrlr_key": "ckey2", 00:31:57.444 "allow_unrecognized_csi": false, 00:31:57.444 "method": "bdev_nvme_attach_controller", 00:31:57.444 "req_id": 1 00:31:57.444 } 00:31:57.444 Got JSON-RPC error response 00:31:57.444 response: 00:31:57.444 { 00:31:57.444 "code": -5, 00:31:57.444 "message": "Input/output 
error" 00:31:57.444 } 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.444 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.445 nvme0n1 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.445 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.705 request: 00:31:57.705 { 00:31:57.705 "name": "nvme0", 00:31:57.705 "dhchap_key": "key1", 00:31:57.705 "dhchap_ctrlr_key": "ckey2", 00:31:57.705 "method": "bdev_nvme_set_keys", 00:31:57.705 "req_id": 1 00:31:57.705 } 00:31:57.705 Got JSON-RPC error response 00:31:57.705 response: 00:31:57.705 { 00:31:57.705 "code": -13, 00:31:57.705 "message": "Permission denied" 00:31:57.705 } 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:57.705 06:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:58.646 06:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.646 06:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:58.646 06:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.646 06:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.906 06:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.906 06:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:58.906 06:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:59.845 06:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.845 06:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:59.845 06:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.845 06:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.845 06:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgxOWI0Y2U2MWNhZmNjMmZiY2I2OGZjZGUxYmRkMWU1MDJjMTc4YjI2MjQ3MGJk7DqXbQ==: 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: ]] 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODEwMjUyYjZmNGNjMTg2OTk4Nzc0NDM5MDIxOTgwYjg5MThkMDUxZTRhMzNhYWYwfg+FVg==: 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.845 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.105 nvme0n1 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2RkMmM2ZTFlMjBkMzIxZmUyNjFjODQwOTliMDEyODR1h4ID: 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: ]] 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmVjZDFhOWFlZWE5M2ViZjAzNjVkM2Q0MzZkNTYzNDesQ/w6: 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.105 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.105 request: 00:32:00.105 { 00:32:00.105 "name": "nvme0", 00:32:00.105 "dhchap_key": "key2", 00:32:00.105 "dhchap_ctrlr_key": "ckey1", 00:32:00.105 "method": "bdev_nvme_set_keys", 00:32:00.105 "req_id": 1 00:32:00.105 } 00:32:00.105 Got JSON-RPC error response 00:32:00.105 response: 00:32:00.105 { 00:32:00.105 "code": -13, 00:32:00.105 "message": "Permission denied" 00:32:00.105 } 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:00.106 06:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:01.046 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.046 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:01.046 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.046 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.046 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:01.306 06:42:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.306 rmmod nvme_tcp 00:32:01.306 rmmod nvme_fabrics 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2992124 ']' 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2992124 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2992124 ']' 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2992124 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2992124 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2992124' 00:32:01.306 killing process with pid 2992124 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2992124 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2992124 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:01.306 06:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:03.847 06:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:07.146 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:07.146 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:07.717 06:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.epq /tmp/spdk.key-null.fgE /tmp/spdk.key-sha256.FG4 /tmp/spdk.key-sha384.uwW /tmp/spdk.key-sha512.DYd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:07.717 06:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:11.014 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:32:11.014 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:11.014 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:11.014 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:11.584 00:32:11.584 real 1m0.929s 00:32:11.584 user 0m54.632s 00:32:11.584 sys 0m16.153s 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.584 ************************************ 00:32:11.584 END TEST nvmf_auth_host 00:32:11.584 ************************************ 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.584 ************************************ 00:32:11.584 START TEST nvmf_digest 00:32:11.584 ************************************ 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:11.584 * Looking for test storage... 
00:32:11.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.584 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.846 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:11.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.847 --rc genhtml_branch_coverage=1 00:32:11.847 --rc genhtml_function_coverage=1 00:32:11.847 --rc genhtml_legend=1 00:32:11.847 --rc geninfo_all_blocks=1 00:32:11.847 --rc geninfo_unexecuted_blocks=1 00:32:11.847 00:32:11.847 ' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:11.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.847 --rc genhtml_branch_coverage=1 00:32:11.847 --rc genhtml_function_coverage=1 00:32:11.847 --rc genhtml_legend=1 00:32:11.847 --rc geninfo_all_blocks=1 00:32:11.847 --rc geninfo_unexecuted_blocks=1 00:32:11.847 00:32:11.847 ' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:11.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.847 --rc genhtml_branch_coverage=1 00:32:11.847 --rc genhtml_function_coverage=1 00:32:11.847 --rc genhtml_legend=1 00:32:11.847 --rc geninfo_all_blocks=1 00:32:11.847 --rc geninfo_unexecuted_blocks=1 00:32:11.847 00:32:11.847 ' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:11.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.847 --rc genhtml_branch_coverage=1 00:32:11.847 --rc genhtml_function_coverage=1 00:32:11.847 --rc genhtml_legend=1 00:32:11.847 --rc geninfo_all_blocks=1 00:32:11.847 --rc geninfo_unexecuted_blocks=1 00:32:11.847 00:32:11.847 ' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.847 
06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:11.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:11.847 06:42:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:32:11.847 06:42:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.981 
06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:19.981 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:19.981 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:19.981 Found net devices under 0000:4b:00.0: cvl_0_0 
00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:19.981 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:19.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:19.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:32:19.981 00:32:19.981 --- 10.0.0.2 ping statistics --- 00:32:19.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.981 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:19.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:32:19.981 00:32:19.981 --- 10.0.0.1 ping statistics --- 00:32:19.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.981 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:19.981 ************************************ 00:32:19.981 START TEST nvmf_digest_clean 00:32:19.981 ************************************ 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3009716 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3009716 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3009716 ']' 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:19.981 06:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:19.982 [2024-11-20 06:42:39.524248] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:32:19.982 [2024-11-20 06:42:39.524308] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.982 [2024-11-20 06:42:39.625547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.982 [2024-11-20 06:42:39.677121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.982 [2024-11-20 06:42:39.677185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.982 [2024-11-20 06:42:39.677194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.982 [2024-11-20 06:42:39.677201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.982 [2024-11-20 06:42:39.677207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
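The nvmfappstart/waitforlisten sequence traced above launches nvmf_tgt inside the target namespace with --wait-for-rpc and blocks until the RPC socket answers. A minimal sketch of that pattern — the polling loop is an approximation of what waitforlisten does, not its exact body:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the RPC socket until the target responds, bailing out if it died.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done
    # --wait-for-rpc holds off subsystem init until this explicit call.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
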
00:32:19.982 [2024-11-20 06:42:39.677940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.243 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:20.243 null0 00:32:20.243 [2024-11-20 06:42:40.498488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.504 [2024-11-20 06:42:40.522814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3009783 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3009783 /var/tmp/bperf.sock 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3009783 ']' 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:20.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:20.504 06:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:20.504 [2024-11-20 06:42:40.582571] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:32:20.504 [2024-11-20 06:42:40.582635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009783 ] 00:32:20.504 [2024-11-20 06:42:40.657834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.504 [2024-11-20 06:42:40.710595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.449 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:21.449 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:21.449 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:21.449 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:21.449 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:21.449 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.449 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.709 nvme0n1 00:32:21.709 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:21.709 06:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:21.969 Running I/O for 2 seconds... 
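Each benchmark pass drives bdevperf entirely over its own RPC socket, exactly as traced above: start it idle with --wait-for-rpc, finish framework init, attach the NVMe/TCP controller with data digest enabled, then kick off the workload. In sketch form (paths shortened to the repo root; the waitforlisten-style socket polling between launch and first RPC is omitted for brevity):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC32C) this test measures.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
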
00:32:23.851 19658.00 IOPS, 76.79 MiB/s [2024-11-20T05:42:44.130Z] 20258.00 IOPS, 79.13 MiB/s 00:32:23.851 Latency(us) 00:32:23.851 [2024-11-20T05:42:44.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.851 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:23.851 nvme0n1 : 2.00 20288.87 79.25 0.00 0.00 6303.08 2280.11 19660.80 00:32:23.851 [2024-11-20T05:42:44.130Z] =================================================================================================================== 00:32:23.851 [2024-11-20T05:42:44.130Z] Total : 20288.87 79.25 0.00 0.00 6303.08 2280.11 19660.80 00:32:23.851 { 00:32:23.851 "results": [ 00:32:23.851 { 00:32:23.851 "job": "nvme0n1", 00:32:23.851 "core_mask": "0x2", 00:32:23.851 "workload": "randread", 00:32:23.851 "status": "finished", 00:32:23.851 "queue_depth": 128, 00:32:23.851 "io_size": 4096, 00:32:23.851 "runtime": 2.003266, 00:32:23.851 "iops": 20288.868278101858, 00:32:23.851 "mibps": 79.25339171133538, 00:32:23.851 "io_failed": 0, 00:32:23.851 "io_timeout": 0, 00:32:23.851 "avg_latency_us": 6303.076248400748, 00:32:23.851 "min_latency_us": 2280.1066666666666, 00:32:23.851 "max_latency_us": 19660.8 00:32:23.851 } 00:32:23.851 ], 00:32:23.851 "core_count": 1 00:32:23.851 } 00:32:23.851 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:23.851 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:23.851 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:23.851 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:23.851 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:23.851 | select(.opcode=="crc32c") 00:32:23.851 | "\(.module_name) \(.executed)"' 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3009783 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3009783 ']' 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3009783 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3009783 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3009783' 00:32:24.111 killing process with pid 3009783 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3009783 00:32:24.111 Received shutdown signal, test time was about 2.000000 seconds 00:32:24.111 00:32:24.111 Latency(us) 00:32:24.111 [2024-11-20T05:42:44.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.111 [2024-11-20T05:42:44.390Z] =================================================================================================================== 00:32:24.111 [2024-11-20T05:42:44.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.111 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3009783 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3010608 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3010608 /var/tmp/bperf.sock 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3010608 ']' 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:24.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:24.371 06:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:24.371 [2024-11-20 06:42:44.488573] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:32:24.371 [2024-11-20 06:42:44.488629] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010608 ] 00:32:24.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:24.371 Zero copy mechanism will not be used. 00:32:24.371 [2024-11-20 06:42:44.569802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.371 [2024-11-20 06:42:44.599112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.312 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:25.312 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:25.312 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:25.312 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:25.312 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:25.312 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:25.312 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:25.883 nvme0n1 00:32:25.883 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:25.883 06:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:25.883 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:25.883 Zero copy mechanism will not be used. 00:32:25.883 Running I/O for 2 seconds... 
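After each run completes, the get_accel_stats/read pair seen in the trace pulls the accel-framework statistics back over the same socket and asserts that crc32c digests were actually computed, and by the expected module (software here; dsa when scan_dsa is true). Roughly — the jq filter is verbatim from the trace, the surrounding shell is a reconstruction:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # The pass only counts if digests ran and came from the expected module.
    (( acc_executed > 0 )) && [[ $acc_module == software ]]
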
00:32:27.763 3122.00 IOPS, 390.25 MiB/s [2024-11-20T05:42:48.042Z] 3579.50 IOPS, 447.44 MiB/s 00:32:27.763 Latency(us) 00:32:27.763 [2024-11-20T05:42:48.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.763 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:27.763 nvme0n1 : 2.00 3580.72 447.59 0.00 0.00 4466.15 730.45 8901.97 00:32:27.763 [2024-11-20T05:42:48.042Z] =================================================================================================================== 00:32:27.763 [2024-11-20T05:42:48.042Z] Total : 3580.72 447.59 0.00 0.00 4466.15 730.45 8901.97 00:32:27.763 { 00:32:27.763 "results": [ 00:32:27.763 { 00:32:27.763 "job": "nvme0n1", 00:32:27.763 "core_mask": "0x2", 00:32:27.763 "workload": "randread", 00:32:27.763 "status": "finished", 00:32:27.763 "queue_depth": 16, 00:32:27.763 "io_size": 131072, 00:32:27.763 "runtime": 2.003788, 00:32:27.763 "iops": 3580.7181198809453, 00:32:27.763 "mibps": 447.58976498511817, 00:32:27.763 "io_failed": 0, 00:32:27.763 "io_timeout": 0, 00:32:27.763 "avg_latency_us": 4466.15352195122, 00:32:27.763 "min_latency_us": 730.4533333333334, 00:32:27.763 "max_latency_us": 8901.973333333333 00:32:27.763 } 00:32:27.763 ], 00:32:27.763 "core_count": 1 00:32:27.763 } 00:32:27.763 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:27.763 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:27.763 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:27.763 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:27.763 | select(.opcode=="crc32c") 00:32:27.763 | "\(.module_name) \(.executed)"' 00:32:27.764 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3010608 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3010608 ']' 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3010608 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3010608 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3010608' 00:32:28.025 killing process with pid 3010608 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3010608 00:32:28.025 Received shutdown signal, test time was about 2.000000 seconds 00:32:28.025 00:32:28.025 Latency(us) 00:32:28.025 [2024-11-20T05:42:48.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.025 [2024-11-20T05:42:48.304Z] =================================================================================================================== 00:32:28.025 [2024-11-20T05:42:48.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:28.025 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3010608 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3011433 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3011433 /var/tmp/bperf.sock 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3011433 ']' 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:28.286 06:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:28.286 [2024-11-20 06:42:48.445987] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:32:28.286 [2024-11-20 06:42:48.446041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011433 ] 00:32:28.286 [2024-11-20 06:42:48.530934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.286 [2024-11-20 06:42:48.559834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.226 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:29.226 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:29.226 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:29.226 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:29.226 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:29.226 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.226 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.487 nvme0n1 00:32:29.488 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:29.488 06:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:29.749 Running I/O for 2 seconds... 
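The JSON blob bdevperf prints after each pass is machine-readable; if captured to a file (bperf.json below is a hypothetical name, the log only shows it inline), the headline numbers can be pulled out with jq using the field names exactly as printed above:

    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us) us"' bperf.json
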
00:32:31.636 30070.00 IOPS, 117.46 MiB/s [2024-11-20T05:42:51.915Z] 29891.00 IOPS, 116.76 MiB/s
00:32:31.636 Latency(us)
00:32:31.636 [2024-11-20T05:42:51.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.636 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:31.636 nvme0n1 : 2.00 29889.49 116.76 0.00 0.00 4275.69 1774.93 10813.44
00:32:31.636 [2024-11-20T05:42:51.915Z] ===================================================================================================================
00:32:31.636 [2024-11-20T05:42:51.915Z] Total : 29889.49 116.76 0.00 0.00 4275.69 1774.93 10813.44
00:32:31.636 {
00:32:31.636 "results": [
00:32:31.636 {
00:32:31.636 "job": "nvme0n1",
00:32:31.636 "core_mask": "0x2",
00:32:31.636 "workload": "randwrite",
00:32:31.636 "status": "finished",
00:32:31.636 "queue_depth": 128,
00:32:31.636 "io_size": 4096,
00:32:31.636 "runtime": 2.004116,
00:32:31.636 "iops": 29889.48743485906,
00:32:31.636 "mibps": 116.7558102924182,
00:32:31.636 "io_failed": 0,
00:32:31.636 "io_timeout": 0,
00:32:31.636 "avg_latency_us": 4275.691241472183,
00:32:31.636 "min_latency_us": 1774.9333333333334,
00:32:31.636 "max_latency_us": 10813.44
00:32:31.636 }
00:32:31.636 ],
00:32:31.636 "core_count": 1
00:32:31.636 }
00:32:31.636 06:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:32:31.636 06:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:32:31.636 06:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:31.636 06:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:31.636 | select(.opcode=="crc32c")
00:32:31.636 | "\(.module_name) \(.executed)"'
00:32:31.636 06:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3011433
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3011433 ']'
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3011433
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3011433
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '['
reactor_1 = sudo ']' 00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3011433' 00:32:31.980 killing process with pid 3011433 00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3011433 00:32:31.980 Received shutdown signal, test time was about 2.000000 seconds 00:32:31.980 00:32:31.980 Latency(us) 00:32:31.980 [2024-11-20T05:42:52.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.980 [2024-11-20T05:42:52.259Z] =================================================================================================================== 00:32:31.980 [2024-11-20T05:42:52.259Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:31.980 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3011433 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3012121 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3012121 /var/tmp/bperf.sock 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3012121 ']' 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:32.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:32.285 06:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:32.285 [2024-11-20 06:42:52.292062] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
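
Between the result dump and the kill above, digest.sh@93-96 verified where the digest work actually ran: it pulls accel_get_stats from the bperf socket and requires that the crc32c opcode was executed by the software module (scan_dsa=false, so no DSA offload is expected). The check, condensed from the trace into one pipeline:

    # expect "software <n>" with n > 0 for opcode crc32c
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'

The pass starting above (digest.sh@131, bperfpid 3012121) repeats the whole procedure with 128 KiB I/O at queue depth 16.
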
00:32:32.285 [2024-11-20 06:42:52.292117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3012121 ] 00:32:32.285 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:32.285 Zero copy mechanism will not be used. 00:32:32.285 [2024-11-20 06:42:52.376125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.285 [2024-11-20 06:42:52.405109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.863 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:32.863 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:32:32.863 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:32.863 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:32.863 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:33.124 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.124 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.383 nvme0n1 00:32:33.383 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:33.383 06:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:33.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:33.383 Zero copy mechanism will not be used. 00:32:33.383 Running I/O for 2 seconds... 
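
The two "Zero copy mechanism will not be used" notices are expected here: bdevperf reports that the 131072-byte I/O size exceeds its 65536-byte zero-copy threshold and disables that path for this run. The numbers it later reports are self-consistent; a quick check of the MiB/s figure against the JSON fields (editorial arithmetic, not part of the log):

    # mibps = iops * io_size / 2^20
    echo '4939.540601288996 * 131072 / 1048576' | bc -l   # 617.4425... as reported
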
00:32:35.708 4137.00 IOPS, 517.12 MiB/s [2024-11-20T05:42:55.987Z] 4945.00 IOPS, 618.12 MiB/s
00:32:35.708 Latency(us)
00:32:35.708 [2024-11-20T05:42:55.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:35.708 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:35.708 nvme0n1 : 2.01 4939.54 617.44 0.00 0.00 3232.92 1208.32 7973.55
00:32:35.708 [2024-11-20T05:42:55.987Z] ===================================================================================================================
00:32:35.708 [2024-11-20T05:42:55.987Z] Total : 4939.54 617.44 0.00 0.00 3232.92 1208.32 7973.55
00:32:35.708 {
00:32:35.708 "results": [
00:32:35.708 {
00:32:35.708 "job": "nvme0n1",
00:32:35.708 "core_mask": "0x2",
00:32:35.708 "workload": "randwrite",
00:32:35.708 "status": "finished",
00:32:35.708 "queue_depth": 16,
00:32:35.708 "io_size": 131072,
00:32:35.708 "runtime": 2.006057,
00:32:35.708 "iops": 4939.540601288996,
00:32:35.708 "mibps": 617.4425751611245,
00:32:35.708 "io_failed": 0,
00:32:35.708 "io_timeout": 0,
00:32:35.708 "avg_latency_us": 3232.924078447203,
00:32:35.708 "min_latency_us": 1208.32,
00:32:35.708 "max_latency_us": 7973.546666666667
00:32:35.708 }
00:32:35.708 ],
00:32:35.708 "core_count": 1
00:32:35.708 }
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:35.708 | select(.opcode=="crc32c")
00:32:35.708 | "\(.module_name) \(.executed)"'
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3012121
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3012121 ']'
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3012121
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:35.708 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3012121
00:32:35.709 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:32:35.709 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1
= sudo ']' 00:32:35.709 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3012121' 00:32:35.709 killing process with pid 3012121 00:32:35.709 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3012121 00:32:35.709 Received shutdown signal, test time was about 2.000000 seconds 00:32:35.709 00:32:35.709 Latency(us) 00:32:35.709 [2024-11-20T05:42:55.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.709 [2024-11-20T05:42:55.988Z] =================================================================================================================== 00:32:35.709 [2024-11-20T05:42:55.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.709 06:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3012121 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3009716 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3009716 ']' 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3009716 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3009716 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3009716' 00:32:35.970 killing process with pid 3009716 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3009716 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3009716 00:32:35.970 00:32:35.970 real 0m16.725s 00:32:35.970 user 0m33.284s 00:32:35.970 sys 0m3.526s 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:35.970 ************************************ 00:32:35.970 END TEST nvmf_digest_clean 00:32:35.970 ************************************ 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:35.970 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:36.231 ************************************ 00:32:36.231 START TEST nvmf_digest_error 00:32:36.231 ************************************ 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3012841 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3012841 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3012841 ']' 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:36.231 06:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.231 [2024-11-20 06:42:56.333365] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:32:36.231 [2024-11-20 06:42:56.333442] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.231 [2024-11-20 06:42:56.426443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.231 [2024-11-20 06:42:56.462400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.231 [2024-11-20 06:42:56.462432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.231 [2024-11-20 06:42:56.462438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.231 [2024-11-20 06:42:56.462443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.231 [2024-11-20 06:42:56.462447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
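
For the error phase the harness brings up its own target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with every tracepoint group enabled and the framework paused. The NOTICEs above are the target itself explaining how that trace can be captured (spdk_trace -s nvmf -i 0, or copying /dev/shm/nvmf_trace.0). The launch line, verbatim from the trace:

    # nvmf_tgt pinned to instance 0, all tracepoint groups, paused:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc
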
00:32:36.231 [2024-11-20 06:42:56.463013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.174 [2024-11-20 06:42:57.160962] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.174 null0 00:32:37.174 [2024-11-20 06:42:57.239876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.174 [2024-11-20 06:42:57.264085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3013183 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3013183 /var/tmp/bperf.sock 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3013183 ']' 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
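
Before the target is unpaused, digest.sh@104 reroutes the crc32c opcode to the fault-injection accel module; this is what makes digest corruption controllable later in the test. The target then gets its usual config (the null0 bdev and the TCP listener on 10.0.0.2:4420 noted above), and run_bperf_err starts a bdevperf reader (randread, 4 KiB, queue depth 128). The rerouting call as traced, where rpc_cmd is the harness wrapper around the target's RPC socket:

    # target-side: route every crc32c operation through the "error" module
    rpc_cmd accel_assign_opc -o crc32c -m error
    # -> NOTICE: Operation crc32c will be assigned to module error
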
00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:37.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:37.174 06:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.174 [2024-11-20 06:42:57.320853] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:32:37.174 [2024-11-20 06:42:57.320901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013183 ] 00:32:37.174 [2024-11-20 06:42:57.401416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.174 [2024-11-20 06:42:57.431072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.116 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.378 nvme0n1 00:32:38.378 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:38.378 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.378 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
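
The setup sequence above is the whole error experiment: retries are made unbounded with per-error accounting (--nvme-error-stat --bdev-retry-count -1), any stale injection is cleared, the controller is attached with data digest on, and the injector is armed (-t corrupt -i 256, presumably the injection count). With the target's crc32c output corrupted, the data digests it sends no longer match what the host computes on receive, which is the flood of "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions that follows. Condensed from the trace (bperf_rpc targets /var/tmp/bperf.sock, rpc_cmd the target):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable     # clear old state
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # -> nvme0n1
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
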
00:32:38.378 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.378 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:38.378 06:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:38.378 Running I/O for 2 seconds... 00:32:38.378 [2024-11-20 06:42:58.640530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.378 [2024-11-20 06:42:58.640565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.378 [2024-11-20 06:42:58.640574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.378 [2024-11-20 06:42:58.649787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.378 [2024-11-20 06:42:58.649807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.378 [2024-11-20 06:42:58.649814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.661793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.661813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.661820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.672293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.672311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.672318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.680696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.680714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.680721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.690235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.690253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.690260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.699204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.699222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.699228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.708237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.708260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.708266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.718866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.718884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.718890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.727326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.727344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.727351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.737444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.737462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.737469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.747416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.747434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.747440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.755282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.755300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.755306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.767437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.767454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.640 [2024-11-20 06:42:58.767461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.640 [2024-11-20 06:42:58.779486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.640 [2024-11-20 06:42:58.779504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.779510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.791611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.791628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.791635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.802528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.802546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.802552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.812104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.812121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.812128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.823731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.823749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.823756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.832354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.832372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.832378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.841756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.841773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.841779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.850366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.850383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.850389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.859017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.859034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.859041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.868299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.868317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.868323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.880410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.880427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.880437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.887936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.887953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.887959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.897675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.897692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:38.641 [2024-11-20 06:42:58.897699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.907149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.907170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.907176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.641 [2024-11-20 06:42:58.915473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.641 [2024-11-20 06:42:58.915490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.641 [2024-11-20 06:42:58.915497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.924136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.924154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.924165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.933842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.933859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.933865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.942718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.942735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.942742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.950929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.950946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.950953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.959953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.959974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3038 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.959981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.969085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.969102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.969109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.977963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.977980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.977987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.986543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.986560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.986567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:58.995191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:58.995209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:58.995215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.005121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.005139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.005145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.014395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.014413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.014419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.022777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.022795] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.022802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.031515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.031533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.031539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.040434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.040453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.040461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.048999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.049018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.049024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.058634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.058651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.058658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.067260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.067277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.067284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.075217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.075234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.903 [2024-11-20 06:42:59.075241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.903 [2024-11-20 06:42:59.084576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:38.903 [2024-11-20 06:42:59.084594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.903 [2024-11-20 06:42:59.084601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:38.903 [2024-11-20 06:42:59.094166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0)
00:32:38.903 [2024-11-20 06:42:59.094183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.903 [2024-11-20 06:42:59.094190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence (nvme_tcp.c:1365 "data digest error on tqpair=(0x8940e0)", nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats every 8-10 ms on qid:1 with varying cid and lba values, from 06:42:59.104569 through 06:42:59.616751 ...]
00:32:39.429 27305.00 IOPS, 106.66 MiB/s [2024-11-20T05:42:59.708Z]
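A quick sanity check on that throughput checkpoint, assuming the len:1 reads target a 4096-byte-block namespace (an assumption; the block size is not shown in this excerpt): 27305 IOPS x 4096 B = 111,841,280 B/s, and 111,841,280 / 2^20 = 106.66 MiB/s, which matches the reported figure exactly (27305 / 256 = 106.66).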
00:32:39.429 [2024-11-20 06:42:59.626623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0)
00:32:39.429 [2024-11-20 06:42:59.626641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.429 [2024-11-20 06:42:59.626647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the identical digest-error / READ / TRANSIENT TRANSPORT ERROR (00/22) pattern continues without a break, still on qid:1 of tqpair=(0x8940e0) and with varying cid and lba values, from 06:42:59.637235 through 06:43:00.396820 (harness time 00:32:39.429 to 00:32:40.218) ...]
00:32:40.218 [2024-11-20 06:43:00.405332]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.405351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.405357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.414860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.414877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.414884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.423777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.423795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.423801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.431860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.431878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.431884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.441060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.441079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.441085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.449345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.449363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.449369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.459114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.459132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.459138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:40.218 [2024-11-20 06:43:00.468395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.468412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.468419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.477023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.477040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.477047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.218 [2024-11-20 06:43:00.485221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.218 [2024-11-20 06:43:00.485238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.218 [2024-11-20 06:43:00.485245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.479 [2024-11-20 06:43:00.495146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.479 [2024-11-20 06:43:00.495168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.479 [2024-11-20 06:43:00.495175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.479 [2024-11-20 06:43:00.502577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.479 [2024-11-20 06:43:00.502594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.479 [2024-11-20 06:43:00.502601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.479 [2024-11-20 06:43:00.512253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.479 [2024-11-20 06:43:00.512271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.479 [2024-11-20 06:43:00.512277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.479 [2024-11-20 06:43:00.521516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.479 [2024-11-20 06:43:00.521534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.479 [2024-11-20 06:43:00.521544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.529871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.529889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.529896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.539366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.539383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.539390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.547588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.547605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.547612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.556462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.556478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.556484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.566699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.566716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.566723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.576368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.576386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.576392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.585082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.585099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.585106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.595671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.595689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.595696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.604530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.604550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.604557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 [2024-11-20 06:43:00.614074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.614092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.614099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 27619.00 IOPS, 107.89 MiB/s [2024-11-20T05:43:00.759Z] [2024-11-20 06:43:00.622223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8940e0) 00:32:40.480 [2024-11-20 06:43:00.622240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.480 [2024-11-20 06:43:00.622247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.480 00:32:40.480 Latency(us) 00:32:40.480 [2024-11-20T05:43:00.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.480 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:40.480 nvme0n1 : 2.00 27634.45 107.95 0.00 0.00 4626.11 2198.19 20206.93 00:32:40.480 [2024-11-20T05:43:00.759Z] =================================================================================================================== 00:32:40.480 [2024-11-20T05:43:00.759Z] Total : 27634.45 107.95 0.00 0.00 4626.11 2198.19 20206.93 00:32:40.480 { 00:32:40.480 "results": [ 00:32:40.480 { 00:32:40.480 "job": "nvme0n1", 00:32:40.480 "core_mask": "0x2", 00:32:40.480 "workload": "randread", 00:32:40.480 "status": "finished", 00:32:40.480 "queue_depth": 128, 00:32:40.480 "io_size": 4096, 00:32:40.480 "runtime": 2.003514, 00:32:40.480 "iops": 27634.446277889747, 00:32:40.480 "mibps": 107.94705577300682, 00:32:40.480 "io_failed": 0, 00:32:40.480 "io_timeout": 0, 00:32:40.480 "avg_latency_us": 4626.112749822394, 00:32:40.480 "min_latency_us": 2198.1866666666665, 00:32:40.480 "max_latency_us": 20206.933333333334 00:32:40.480 } 00:32:40.480 ], 00:32:40.480 "core_count": 1 00:32:40.480 } 00:32:40.480 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
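A quick consistency check on the summary just printed, as an aside: the MiB/s column is simply IOPS times the 4096-byte I/O size (27634.45 x 4096 / 2^20 = 107.95 MiB/s, matching the "mibps" field), and with 128 commands in flight Little's law predicts an average latency of 128 / 27634.45 s, about 4632 us, in line with the reported 4626.11 us. A minimal shell sketch of the same check, assuming the JSON block above was saved to a hypothetical result.json:

    # Recompute bdevperf's derived columns from its own JSON output
    # (result.json is an assumed capture of the block printed above).
    iops=$(jq '.results[0].iops' result.json)
    io_size=$(jq '.results[0].io_size' result.json)
    qd=$(jq '.results[0].queue_depth' result.json)

    # Throughput: IOPS * io_size bytes -> MiB/s (should match "mibps").
    awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'

    # Little's law: in-flight commands / IOPS -> average latency in us
    # (should match "avg_latency_us" to within rounding).
    awk -v i="$iops" -v q="$qd" 'BEGIN { printf "%.0f us\n", q / i * 1e6 }'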
00:32:40.480 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:40.480 | .driver_specific
00:32:40.480 | .nvme_error
00:32:40.480 | .status_code
00:32:40.480 | .command_transient_transport_error'
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3013183
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3013183 ']'
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3013183
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3013183
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:32:40.741 06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3013183'
killing process with pid 3013183
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3013183
Received shutdown signal, test time was about 2.000000 seconds
00:32:40.741
00:32:40.741 Latency(us)
[2024-11-20T05:43:01.020Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
[2024-11-20T05:43:01.020Z] ===================================================================================================================
[2024-11-20T05:43:01.020Z] Total              : 0.00        0.00  0.00   0.00    0.00  0.00     0.00
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3013183
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3013867
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3013867 /var/tmp/bperf.sock
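The trace above is the test's pass/fail check in compressed form: it pulls per-bdev I/O statistics over the bdevperf RPC socket and extracts the transient-transport-error counter that --nvme-error-stat told the driver to keep, and here it found 217 such errors, which satisfies the (( errcount > 0 )) assertion. A standalone sketch of that pipeline, with the socket and workspace paths taken from this log rather than quoted from digest.sh itself:

    # Return how many commands completed with TRANSIENT TRANSPORT ERROR,
    # as accumulated since bdev_nvme_set_options --nvme-error-stat.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    get_transient_errcount() {
        "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The digest test only passes if the injected corruption actually
    # produced transient errors, e.g. the 217 seen in this run.
    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 )) || echo "FAIL: digest corruption produced no errors"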
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3013867 ']'
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
06:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:41.002 [2024-11-20 06:43:01.044823] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:32:41.002 [2024-11-20 06:43:01.044881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013867 ]
00:32:41.002 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:41.002 Zero copy mechanism will not be used.
00:32:41.002 [2024-11-20 06:43:01.129798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:41.002 [2024-11-20 06:43:01.159201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:41.572 06:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:41.572 06:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:32:41.572 06:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:41.572 06:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:41.832 06:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:41.832 06:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:41.832 06:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:41.832 06:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
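Three RPC calls in this stretch of the trace do the real work of arming the second digest-error run, and they are easy to miss in the noise. Restated as a sketch (the commands are the ones traced in this log; the comments are a reading of what each knob is for, not quoted SPDK documentation):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # Keep per-status-code NVMe error counters (so bdev_get_iostat can
    # report them later) and let the bdev layer retry failed I/O without
    # limit, so injected failures are counted instead of killing the job.
    "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any crc32c error injection left armed by the previous run.
    "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t disable

    # Attach with data digest enabled: --ddgst makes the host compute and
    # verify a CRC32C digest over received data, which is exactly the path
    # the corrupt-mode injection will break. This is the attach traced
    # immediately below.
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

Once the controller is attached, the trace below re-arms injection with accel_error_inject_error -o crc32c -t corrupt -i 32, making the accel framework deliberately corrupt crc32c results so that digest verification fails; reading -i 32 as an injection count or interval (twice the queue depth of 16) is an assumption here, not a documented fact.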
-n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.092 nvme0n1 00:32:42.092 06:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:42.092 06:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.092 06:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:42.092 06:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.092 06:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:42.092 06:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:42.353 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:42.353 Zero copy mechanism will not be used. 00:32:42.353 Running I/O for 2 seconds... 00:32:42.353 [2024-11-20 06:43:02.397819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.353 [2024-11-20 06:43:02.397854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.353 [2024-11-20 06:43:02.397863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.353 [2024-11-20 06:43:02.408106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.353 [2024-11-20 06:43:02.408131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.353 [2024-11-20 06:43:02.408139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.353 [2024-11-20 06:43:02.419613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.353 [2024-11-20 06:43:02.419633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.353 [2024-11-20 06:43:02.419641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.353 [2024-11-20 06:43:02.431660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.353 [2024-11-20 06:43:02.431678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.353 [2024-11-20 06:43:02.431685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.353 [2024-11-20 06:43:02.440027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.353 [2024-11-20 06:43:02.440046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.353 [2024-11-20 06:43:02.440057] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.353 [2024-11-20 06:43:02.444874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.353 [2024-11-20 06:43:02.444894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.444900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.449331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.449350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.449357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.453762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.453780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.453786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.458117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.458135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.458141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.463740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.463759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.463766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.468161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.468180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.468186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.474718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.474737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.474743] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.479764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.479782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.479789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.484261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.484283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.484290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.492880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.492899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.492906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.497478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.497496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.497503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.502090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.502107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.502114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.507537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.507555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.507562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.514648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.514667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:42.354 [2024-11-20 06:43:02.514673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.519292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.519310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.519317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.524627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.524645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.524652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.529024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.529042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.529049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.533972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.533989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.533996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.539825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.539845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.539851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.549148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.549170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.549177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.557861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.557879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.557886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.569115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.569134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.569140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.576756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.576774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.576780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.581127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.581146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.581153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.585867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.585885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.585892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.592580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.592599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.592609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.603321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.354 [2024-11-20 06:43:02.603340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.354 [2024-11-20 06:43:02.603346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.354 [2024-11-20 06:43:02.614587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.355 [2024-11-20 06:43:02.614605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.355 [2024-11-20 06:43:02.614611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.355 [2024-11-20 06:43:02.626092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.355 [2024-11-20 06:43:02.626111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.355 [2024-11-20 06:43:02.626118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.637055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.637074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.637081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.644332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.644350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.644357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.655471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.655490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.655497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.665235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.665253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.665260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.672563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.672582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.672589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.681392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.681414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.681421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.686388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.686407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.686413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.690905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.690922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.690929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.695454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.695472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.695479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.699887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.699905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.699911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.707147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.707171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.707177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.711872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.711890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.711897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.716288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 
[2024-11-20 06:43:02.716306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.716313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.727712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.727730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.727736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.738711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.738730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.738736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.748773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.748792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.748798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.752881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.752900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.752906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.757279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.757298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.766639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.766658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.766664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.771017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.771036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.771042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.777920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.616 [2024-11-20 06:43:02.777939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.616 [2024-11-20 06:43:02.777945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.616 [2024-11-20 06:43:02.786560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.617 [2024-11-20 06:43:02.786578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.617 [2024-11-20 06:43:02.786585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.617 [2024-11-20 06:43:02.790945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.617 [2024-11-20 06:43:02.790963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.617 [2024-11-20 06:43:02.790972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.617 [2024-11-20 06:43:02.795231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.617 [2024-11-20 06:43:02.795249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.617 [2024-11-20 06:43:02.795256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.617 [2024-11-20 06:43:02.802287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.617 [2024-11-20 06:43:02.802306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.617 [2024-11-20 06:43:02.802312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.617 [2024-11-20 06:43:02.812085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:42.617 [2024-11-20 06:43:02.812104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.617 [2024-11-20 06:43:02.812111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.617 [2024-11-20 06:43:02.820987] 
00:32:43.143 4307.00 IOPS, 538.38 MiB/s [2024-11-20T05:43:03.422Z]
00:32:43.669 [2024-11-20 06:43:03.771095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870)
00:32:43.669 [2024-11-20 06:43:03.771114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.669 [2024-11-20 06:43:03.771120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:43.670 [2024-11-20 06:43:03.782359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870)
00:32:43.670
[2024-11-20 06:43:03.782378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.782384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.790394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.790413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.790419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.799904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.799923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.799929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.810201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.810220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.810230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.822475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.822493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.822500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.834083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.834101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.834108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.845315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.845333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.845339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.851461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.851479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.851486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.857439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.857457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.857463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.862227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.862245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.862251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.869675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.869694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.869701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.875621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.875639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.875645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.880390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.880411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.880418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.887577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.887595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.887601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.894660] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.894678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.899191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.899209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.899215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.906344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.906363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.906369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.912040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.912059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.912065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.921076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.921095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.921101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.929101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.929120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.929126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.670 [2024-11-20 06:43:03.934442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.934460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.934466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:32:43.670 [2024-11-20 06:43:03.943168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.670 [2024-11-20 06:43:03.943187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.670 [2024-11-20 06:43:03.943194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.951553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.951571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.951578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.959190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.959208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.959214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.966593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.966612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.966619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.975176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.975195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.975201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.979691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.979709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.979715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.984246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.984265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.984272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.988769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.988787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.988793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.993146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.993171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.993180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:03.997542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:03.997562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:03.997568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.001980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.001999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:04.002005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.006488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.006507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:04.006513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.016169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.016187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:04.016194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.020594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.020612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:04.020618] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.029300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.029319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:04.029326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.035235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.035253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:04.035260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.045450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.045469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.933 [2024-11-20 06:43:04.045475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.933 [2024-11-20 06:43:04.052851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.933 [2024-11-20 06:43:04.052872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.052879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.063620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.063639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.063645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.069091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.069108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.069116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.073430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.073448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.073455] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.081437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.081455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.081461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.089164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.089182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.089188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.093330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.093348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.093354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.099128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.099146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.099152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.103280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.103299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.103306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.111920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.111939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.111945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.116442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.116459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:43.934 [2024-11-20 06:43:04.116466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.123463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.123481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.123487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.128413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.128431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.128438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.137926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.137944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.137951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.142302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.142320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.142326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.147812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.147829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.147836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.154959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.154977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.154983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.159315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.159332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.159342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.165640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.165658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.165665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.173353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.173371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.173377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.178367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.178385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.178391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.188700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.188719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.188725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.934 [2024-11-20 06:43:04.200965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:43.934 [2024-11-20 06:43:04.200984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.934 [2024-11-20 06:43:04.200990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.196 [2024-11-20 06:43:04.213763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.196 [2024-11-20 06:43:04.213783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.196 [2024-11-20 06:43:04.213789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.196 [2024-11-20 06:43:04.225569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.196 [2024-11-20 06:43:04.225587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.196 [2024-11-20 06:43:04.225594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.196 [2024-11-20 06:43:04.237500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.196 [2024-11-20 06:43:04.237518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.196 [2024-11-20 06:43:04.237524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.196 [2024-11-20 06:43:04.249387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.196 [2024-11-20 06:43:04.249412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.196 [2024-11-20 06:43:04.249418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.196 [2024-11-20 06:43:04.260786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.196 [2024-11-20 06:43:04.260805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.196 [2024-11-20 06:43:04.260812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.196 [2024-11-20 06:43:04.270454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.270471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.270478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.281393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.281411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.281418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.292121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.292138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.292145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.304954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.304972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.304979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.316621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.316639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.316646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.329151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.329174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.329181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.341642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.341660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.341666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.354410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.354429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.354436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.364076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.364094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.364101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.368403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 [2024-11-20 06:43:04.368421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.197 [2024-11-20 06:43:04.368427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.197 [2024-11-20 06:43:04.373899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870) 00:32:44.197 
[2024-11-20 06:43:04.373918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:44.197 [2024-11-20 06:43:04.373924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:44.197 [2024-11-20 06:43:04.378283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870)
00:32:44.197 [2024-11-20 06:43:04.378301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:44.197 [2024-11-20 06:43:04.378307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:44.197 [2024-11-20 06:43:04.385652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870)
00:32:44.197 [2024-11-20 06:43:04.385670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:44.197 [2024-11-20 06:43:04.385677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:44.197 [2024-11-20 06:43:04.391078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870)
00:32:44.197 [2024-11-20 06:43:04.391096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:44.197 [2024-11-20 06:43:04.391103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:44.197 [2024-11-20 06:43:04.395450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb02870)
00:32:44.197 [2024-11-20 06:43:04.395468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:44.197 [2024-11-20 06:43:04.395474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:44.197 4201.00 IOPS, 525.12 MiB/s
00:32:44.197 Latency(us)
00:32:44.197 [2024-11-20T05:43:04.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:44.197 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:44.197 nvme0n1 : 2.01 4196.64 524.58 0.00 0.00 3809.75 580.27 12834.13
00:32:44.197 [2024-11-20T05:43:04.476Z] ===================================================================================================================
00:32:44.197 [2024-11-20T05:43:04.476Z] Total : 4196.64 524.58 0.00 0.00 3809.75 580.27 12834.13
00:32:44.197 {
00:32:44.197 "results": [
00:32:44.197 {
00:32:44.197 "job": "nvme0n1",
00:32:44.197 "core_mask": "0x2",
00:32:44.197 "workload": "randread",
00:32:44.197 "status": "finished",
00:32:44.197 "queue_depth": 16,
00:32:44.197 "io_size": 131072,
00:32:44.197 "runtime": 2.005892,
00:32:44.197 "iops": 4196.636708257473,
00:32:44.197 "mibps": 524.5795885321842,
00:32:44.197 "io_failed": 0,
00:32:44.197 "io_timeout": 0,
00:32:44.197 "avg_latency_us": 3809.7538037538607,
00:32:44.197 "min_latency_us": 580.2666666666667,
00:32:44.197 "max_latency_us": 12834.133333333333
00:32:44.197 }
00:32:44.197 ],
00:32:44.197 "core_count": 1
00:32:44.197 }
00:32:44.197 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:44.197 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:44.197 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:44.197 | .driver_specific
00:32:44.197 | .nvme_error
00:32:44.197 | .status_code
00:32:44.197 | .command_transient_transport_error'
00:32:44.197 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 272 > 0 ))
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3013867
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3013867 ']'
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3013867
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3013867
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3013867'
00:32:44.458 killing process with pid 3013867
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3013867
00:32:44.458 Received shutdown signal, test time was about 2.000000 seconds
00:32:44.458
00:32:44.458 Latency(us)
00:32:44.458 [2024-11-20T05:43:04.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:44.458 [2024-11-20T05:43:04.737Z] ===================================================================================================================
00:32:44.458 [2024-11-20T05:43:04.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:44.458 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3013867
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3014552
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3014552 /var/tmp/bperf.sock
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3014552 ']'
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:44.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:44.719 06:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:44.719 [2024-11-20 06:43:04.821786] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:32:44.719 [2024-11-20 06:43:04.821842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014552 ]
00:32:44.719 [2024-11-20 06:43:04.905783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:44.719 [2024-11-20 06:43:04.934770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:45.662 06:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:45.923 nvme0n1
00:32:45.923 06:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:45.923 06:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:45.923 06:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:45.923 06:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:45.923 06:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:45.923 06:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:46.185 Running I/O for 2 seconds...
00:32:46.185 [2024-11-20 06:43:06.265455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee4140
00:32:46.185 [2024-11-20 06:43:06.266437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.185 [2024-11-20 06:43:06.266467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:32:46.185 [2024-11-20 06:43:06.274238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220
00:32:46.185 [2024-11-20 06:43:06.275242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.185 [2024-11-20 06:43:06.275260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:46.185 [2024-11-20 06:43:06.282836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300
00:32:46.185 [2024-11-20 06:43:06.283841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.185 [2024-11-20 06:43:06.283858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:46.185 [2024-11-20 06:43:06.291445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee73e0
00:32:46.185 [2024-11-20 06:43:06.292427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.185 [2024-11-20 06:43:06.292444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:46.185 [2024-11-20 06:43:06.300003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eed4e8
00:32:46.185 [2024-11-20 06:43:06.301019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.185 [2024-11-20 06:43:06.301036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
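The randwrite pass that begins here repeats the same recipe in the write direction: a fresh bdevperf is started with -w randwrite -o 4096 -q 128 -t 2, per-controller NVMe error statistics are enabled with an unlimited bdev retry count, the controller is attached over TCP with data digest turned on (--ddgst), and the accel layer is told to corrupt the next 256 crc32c operations. Each corrupted digest then surfaces as a Data digest error in tcp.c and completes its WRITE as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable, which is the pattern in the triplets above and below. A condensed sketch of that setup, with $SPDK_DIR standing in for the Jenkins workspace path (an assumption; the commands and flags themselves are the ones traced above):

    #!/usr/bin/env bash
    # Re-create the error-injection setup traced above ($SPDK_DIR is assumed).
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done      # crude stand-in for waitforlisten
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc accel_error_inject_error -o crc32c -t disable         # clear any stale injection
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                # data digest enabled on the host side
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt the next 256 crc32c ops
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests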
00:32:46.185 [2024-11-20 06:43:06.308574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eee5c8 00:32:46.186 [2024-11-20 06:43:06.309576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.309592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.317117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eef6a8 00:32:46.186 [2024-11-20 06:43:06.318133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.318150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.325638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0788 00:32:46.186 [2024-11-20 06:43:06.326641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.326658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.334272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1868 00:32:46.186 [2024-11-20 06:43:06.335293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.335313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.342822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef2948 00:32:46.186 [2024-11-20 06:43:06.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.343849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.351353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef3a28 00:32:46.186 [2024-11-20 06:43:06.352327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.352343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.359871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef4b08 00:32:46.186 [2024-11-20 06:43:06.360877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.360893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 
sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.368386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef5be8 00:32:46.186 [2024-11-20 06:43:06.369376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.369392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.376875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0a68 00:32:46.186 [2024-11-20 06:43:06.377876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.377892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.385390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1b48 00:32:46.186 [2024-11-20 06:43:06.386363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.386379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.393879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee2c28 00:32:46.186 [2024-11-20 06:43:06.394883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.394900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.402399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:46.186 [2024-11-20 06:43:06.403366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.403382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.410897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee4de8 00:32:46.186 [2024-11-20 06:43:06.411924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.411940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.419379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5ec8 00:32:46.186 [2024-11-20 06:43:06.420359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.420375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.427900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6fa8 00:32:46.186 [2024-11-20 06:43:06.428922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.428938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.436403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:46.186 [2024-11-20 06:43:06.437437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.437454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.444917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eee190 00:32:46.186 [2024-11-20 06:43:06.445936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.445952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.186 [2024-11-20 06:43:06.453432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eef270 00:32:46.186 [2024-11-20 06:43:06.454438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.186 [2024-11-20 06:43:06.454454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.461921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0350 00:32:46.449 [2024-11-20 06:43:06.462927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.462943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.470415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1430 00:32:46.449 [2024-11-20 06:43:06.471417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.471433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.478914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef2510 00:32:46.449 [2024-11-20 06:43:06.479929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.479945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.487410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef35f0 00:32:46.449 [2024-11-20 06:43:06.488368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.488385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.495899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef46d0 00:32:46.449 [2024-11-20 06:43:06.496916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.496932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.504381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef57b0 00:32:46.449 [2024-11-20 06:43:06.505354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.505369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.512868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8d30 00:32:46.449 [2024-11-20 06:43:06.513862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.513877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.521371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0ea0 00:32:46.449 [2024-11-20 06:43:06.522347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.522363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.529876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1f80 00:32:46.449 [2024-11-20 06:43:06.530880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.530896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.538361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3060 00:32:46.449 [2024-11-20 06:43:06.539385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.539401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.546861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee4140 00:32:46.449 [2024-11-20 06:43:06.547878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.547894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.555333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220 00:32:46.449 [2024-11-20 06:43:06.556340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.556358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.563822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:46.449 [2024-11-20 06:43:06.564818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.564835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.572317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee73e0 00:32:46.449 [2024-11-20 06:43:06.573271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.573287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.580816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eed4e8 00:32:46.449 [2024-11-20 06:43:06.581817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.581833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.589325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eee5c8 00:32:46.449 [2024-11-20 06:43:06.590331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.590347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.597820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eef6a8 00:32:46.449 [2024-11-20 06:43:06.598834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 
06:43:06.598850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.606314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0788 00:32:46.449 [2024-11-20 06:43:06.607319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.607334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.614804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1868 00:32:46.449 [2024-11-20 06:43:06.615823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.615839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.623297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef2948 00:32:46.449 [2024-11-20 06:43:06.624310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.624327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.631794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef3a28 00:32:46.449 [2024-11-20 06:43:06.632814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.632830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.640310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef4b08 00:32:46.449 [2024-11-20 06:43:06.641324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.641341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.648790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef5be8 00:32:46.449 [2024-11-20 06:43:06.649846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.449 [2024-11-20 06:43:06.649863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.449 [2024-11-20 06:43:06.657504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0a68 00:32:46.449 [2024-11-20 06:43:06.658466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
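Every iteration in the stream above is the same three-record pattern: tcp.c flags a CRC32C mismatch on a data PDU (data_crc32_calc_done), nvme_qpair.c prints the WRITE being failed, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable, which is why the --bdev-retry-count -1 setting keeps the workload running for the full window. A small post-processing sketch for a saved copy of this output (the bperf.log filename is hypothetical); grep -o is used because the capture wraps several records onto one line, so counting lines would undercount:

```bash
#!/usr/bin/env bash
# Tally injected digest errors from a saved bdevperf log (hypothetical
# file name bperf.log) and confirm every completion stayed retryable.
grep -o 'Data digest error' bperf.log | wc -l            # injected errors seen
grep -o 'TRANSIENT TRANSPORT ERROR' bperf.log | wc -l    # matching completions
grep -o 'dnr:1' bperf.log | wc -l                        # non-retryable; expect 0
```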
00:32:46.450 [2024-11-20 06:43:06.658482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.450 [2024-11-20 06:43:06.666004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1b48 00:32:46.450 [2024-11-20 06:43:06.667008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.450 [2024-11-20 06:43:06.667024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.450 [2024-11-20 06:43:06.674495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee2c28 00:32:46.450 [2024-11-20 06:43:06.675496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.450 [2024-11-20 06:43:06.675512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.450 [2024-11-20 06:43:06.682983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:46.450 [2024-11-20 06:43:06.683942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.450 [2024-11-20 06:43:06.683958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.450 [2024-11-20 06:43:06.691452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee4de8 00:32:46.450 [2024-11-20 06:43:06.692413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.450 [2024-11-20 06:43:06.692429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.450 [2024-11-20 06:43:06.699945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5ec8 00:32:46.450 [2024-11-20 06:43:06.700945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.450 [2024-11-20 06:43:06.700961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.450 [2024-11-20 06:43:06.708453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6fa8 00:32:46.450 [2024-11-20 06:43:06.709483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.450 [2024-11-20 06:43:06.709499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.450 [2024-11-20 06:43:06.716966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:46.450 [2024-11-20 06:43:06.717970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:46.450 [2024-11-20 06:43:06.717987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.713 [2024-11-20 06:43:06.725470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eee190 00:32:46.713 [2024-11-20 06:43:06.726463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.713 [2024-11-20 06:43:06.726480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.733970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eef270 00:32:46.714 [2024-11-20 06:43:06.734932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.734948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.742450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0350 00:32:46.714 [2024-11-20 06:43:06.743440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.743457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.750957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1430 00:32:46.714 [2024-11-20 06:43:06.751969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.751985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.759456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef2510 00:32:46.714 [2024-11-20 06:43:06.760435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.760451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.767934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef35f0 00:32:46.714 [2024-11-20 06:43:06.768946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.768963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.776453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef46d0 00:32:46.714 [2024-11-20 06:43:06.777499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:22180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.777519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.784972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef57b0 00:32:46.714 [2024-11-20 06:43:06.785995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.786012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.793468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8d30 00:32:46.714 [2024-11-20 06:43:06.794484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.794501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.801969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0ea0 00:32:46.714 [2024-11-20 06:43:06.802975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.802992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.810467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1f80 00:32:46.714 [2024-11-20 06:43:06.811466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.811482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.818971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3060 00:32:46.714 [2024-11-20 06:43:06.819931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.819948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.827469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee4140 00:32:46.714 [2024-11-20 06:43:06.828463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.828481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.835979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220 00:32:46.714 [2024-11-20 06:43:06.836991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:24192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.837008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.844533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:46.714 [2024-11-20 06:43:06.845545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.845561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.853056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee73e0 00:32:46.714 [2024-11-20 06:43:06.854053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.854069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.861557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eed4e8 00:32:46.714 [2024-11-20 06:43:06.862558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.862575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.870050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eee5c8 00:32:46.714 [2024-11-20 06:43:06.871050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.871067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.878530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eef6a8 00:32:46.714 [2024-11-20 06:43:06.879536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.879552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.887031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0788 00:32:46.714 [2024-11-20 06:43:06.888036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.888052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.895584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1868 00:32:46.714 [2024-11-20 06:43:06.896600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.896616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.904085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef2948 00:32:46.714 [2024-11-20 06:43:06.905044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.905060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.912574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef3a28 00:32:46.714 [2024-11-20 06:43:06.913576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.913593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.921052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef4b08 00:32:46.714 [2024-11-20 06:43:06.922013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.922029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.929545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef5be8 00:32:46.714 [2024-11-20 06:43:06.930544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.930561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.938057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0a68 00:32:46.714 [2024-11-20 06:43:06.939057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.939073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.946566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1b48 00:32:46.714 [2024-11-20 06:43:06.947590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.714 [2024-11-20 06:43:06.947607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.714 [2024-11-20 06:43:06.955065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee2c28 00:32:46.714 [2024-11-20 
06:43:06.956068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.715 [2024-11-20 06:43:06.956084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.715 [2024-11-20 06:43:06.963580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:46.715 [2024-11-20 06:43:06.964581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.715 [2024-11-20 06:43:06.964597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.715 [2024-11-20 06:43:06.972058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee4de8 00:32:46.715 [2024-11-20 06:43:06.973061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.715 [2024-11-20 06:43:06.973077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.715 [2024-11-20 06:43:06.980568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5ec8 00:32:46.715 [2024-11-20 06:43:06.981531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.715 [2024-11-20 06:43:06.981547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.715 [2024-11-20 06:43:06.989075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6fa8 00:32:46.976 [2024-11-20 06:43:06.990081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.976 [2024-11-20 06:43:06.990098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.976 [2024-11-20 06:43:06.997580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:46.976 [2024-11-20 06:43:06.998577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.976 [2024-11-20 06:43:06.998595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.976 [2024-11-20 06:43:07.006066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eee190 00:32:46.976 [2024-11-20 06:43:07.007073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.976 [2024-11-20 06:43:07.007089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.976 [2024-11-20 06:43:07.014560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eef270 
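For context on where this stream starts and stops: bdevperf was launched with -z, so it initializes, opens /var/tmp/bperf.sock, and idles until bdevperf.py sends perform_tests; the "Running I/O for 2 seconds..." banner and everything after it belong to that measured window. A sketch of that lifecycle using the exact invocation from the trace (the backgrounding and ordering here are illustrative):

```bash
#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Exact bdevperf invocation from the trace: core mask 0x2, private RPC
# socket, 4 KiB random writes, 2-second run, queue depth 128; -z means
# wait for an explicit start over RPC instead of running immediately.
$spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# ... the RPC setup shown earlier happens while bdevperf idles ...

# Kick off the measured run; this blocks until the 2-second window ends.
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
```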
00:32:46.976 [2024-11-20 06:43:07.015561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.976 [2024-11-20 06:43:07.015577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.976 [2024-11-20 06:43:07.023040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0350 00:32:46.976 [2024-11-20 06:43:07.024055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.976 [2024-11-20 06:43:07.024071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.976 [2024-11-20 06:43:07.031579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1430 00:32:46.976 [2024-11-20 06:43:07.032591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.976 [2024-11-20 06:43:07.032607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.976 [2024-11-20 06:43:07.040083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef2510 00:32:46.976 [2024-11-20 06:43:07.041085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.041102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.048592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef35f0 00:32:46.977 [2024-11-20 06:43:07.049592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.057079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef46d0 00:32:46.977 [2024-11-20 06:43:07.058095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.058111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.065577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef57b0 00:32:46.977 [2024-11-20 06:43:07.066603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.066620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.074087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d63520) with pdu=0x200016ee8d30 00:32:46.977 [2024-11-20 06:43:07.075104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.075120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.082603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0ea0 00:32:46.977 [2024-11-20 06:43:07.083616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.083632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.091107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1f80 00:32:46.977 [2024-11-20 06:43:07.092114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.092130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.099607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3060 00:32:46.977 [2024-11-20 06:43:07.100621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.100637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.108101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee4140 00:32:46.977 [2024-11-20 06:43:07.109110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.109126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.116594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220 00:32:46.977 [2024-11-20 06:43:07.117475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.117491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.126172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:46.977 [2024-11-20 06:43:07.127495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.127511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.132249] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:46.977 [2024-11-20 06:43:07.132870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.132886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.142637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef6458 00:32:46.977 [2024-11-20 06:43:07.143873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.143890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.150557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef4b08 00:32:46.977 [2024-11-20 06:43:07.151470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.151486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.158970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef5be8 00:32:46.977 [2024-11-20 06:43:07.159878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.159894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.167472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0a68 00:32:46.977 [2024-11-20 06:43:07.168378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.168394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.176295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016edece0 00:32:46.977 [2024-11-20 06:43:07.177046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.177063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.183982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee88f8 00:32:46.977 [2024-11-20 06:43:07.184879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.184895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.192905] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220 00:32:46.977 [2024-11-20 06:43:07.193799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.193815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.201678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6b70 00:32:46.977 [2024-11-20 06:43:07.202563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.202579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.210185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef8a50 00:32:46.977 [2024-11-20 06:43:07.211065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.211081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.218702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee84c0 00:32:46.977 [2024-11-20 06:43:07.219568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.219587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.227210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeaef0 00:32:46.977 [2024-11-20 06:43:07.228065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.228081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.235706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3498 00:32:46.977 [2024-11-20 06:43:07.236589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.236605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:46.977 [2024-11-20 06:43:07.244209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef46d0 00:32:46.977 [2024-11-20 06:43:07.245091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.977 [2024-11-20 06:43:07.245107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 
06:43:07.252702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7538 00:32:47.239 [2024-11-20 06:43:07.253584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.253601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:47.239 29917.00 IOPS, 116.86 MiB/s [2024-11-20T05:43:07.518Z] [2024-11-20 06:43:07.261218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7da8 00:32:47.239 [2024-11-20 06:43:07.262103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.262119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.269735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0a68 00:32:47.239 [2024-11-20 06:43:07.270604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.270621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.278248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee99d8 00:32:47.239 [2024-11-20 06:43:07.279076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.279093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.286756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1b48 00:32:47.239 [2024-11-20 06:43:07.287624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.287640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.295242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeb760 00:32:47.239 [2024-11-20 06:43:07.296132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.296148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.304014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016edfdc0 00:32:47.239 [2024-11-20 06:43:07.304655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.304672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.312695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee0a68 00:32:47.239 [2024-11-20 06:43:07.313716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.313732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.321196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eee190 00:32:47.239 [2024-11-20 06:43:07.322170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.322186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.329694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee99d8 00:32:47.239 [2024-11-20 06:43:07.330668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.330685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.338276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5ec8 00:32:47.239 [2024-11-20 06:43:07.339256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.339272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.346765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee23b8 00:32:47.239 [2024-11-20 06:43:07.347769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.347785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.355286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.239 [2024-11-20 06:43:07.356282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.356299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.363801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1ca0 00:32:47.239 [2024-11-20 06:43:07.364805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.364822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.372299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016edf988 00:32:47.239 [2024-11-20 06:43:07.373313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.373329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.380791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eff3c8 00:32:47.239 [2024-11-20 06:43:07.381813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.381829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.389267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016efda78 00:32:47.239 [2024-11-20 06:43:07.390137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.390153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.397274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6b70 00:32:47.239 [2024-11-20 06:43:07.398119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.398136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.406425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7da8 00:32:47.239 [2024-11-20 06:43:07.407481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.407498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.414960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeaef0 00:32:47.239 [2024-11-20 06:43:07.416027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.416043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.423492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee88f8 00:32:47.239 [2024-11-20 06:43:07.424543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.239 [2024-11-20 06:43:07.424560] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.239 [2024-11-20 06:43:07.432023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee7c50 00:32:47.240 [2024-11-20 06:43:07.433034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.433051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.440525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eec840 00:32:47.240 [2024-11-20 06:43:07.441578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.441597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.449071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee23b8 00:32:47.240 [2024-11-20 06:43:07.450136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.450153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.457595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7da8 00:32:47.240 [2024-11-20 06:43:07.458646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.458662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.466443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef5be8 00:32:47.240 [2024-11-20 06:43:07.467604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.467620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.474937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016efa3a0 00:32:47.240 [2024-11-20 06:43:07.476062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.476079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.481876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee1f80 00:32:47.240 [2024-11-20 06:43:07.482574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.482590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.490368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eed0b0 00:32:47.240 [2024-11-20 06:43:07.490940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.490956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.499167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016efb048 00:32:47.240 [2024-11-20 06:43:07.499971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.499987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:47.240 [2024-11-20 06:43:07.507814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef1ca0 00:32:47.240 [2024-11-20 06:43:07.508632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.240 [2024-11-20 06:43:07.508649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.516318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016edf988 00:32:47.503 [2024-11-20 06:43:07.517138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.517155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.524824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016efd640 00:32:47.503 [2024-11-20 06:43:07.525640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.525657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.533321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016efeb58 00:32:47.503 [2024-11-20 06:43:07.534144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.534164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.541819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016efe720 00:32:47.503 [2024-11-20 06:43:07.542641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 
06:43:07.542658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.550362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eecc78 00:32:47.503 [2024-11-20 06:43:07.551175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.551191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.558844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef8e88 00:32:47.503 [2024-11-20 06:43:07.559660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.559676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.567327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef6cc8 00:32:47.503 [2024-11-20 06:43:07.568109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.568125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.575789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef3e60 00:32:47.503 [2024-11-20 06:43:07.576600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.576617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.584279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6fa8 00:32:47.503 [2024-11-20 06:43:07.585065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.585081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.592774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee23b8 00:32:47.503 [2024-11-20 06:43:07.593588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.593605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.601267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5ec8 00:32:47.503 [2024-11-20 06:43:07.602083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:47.503 [2024-11-20 06:43:07.602099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.609763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee99d8 00:32:47.503 [2024-11-20 06:43:07.610595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.610612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.618256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:47.503 [2024-11-20 06:43:07.619087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.619103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.626727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3498 00:32:47.503 [2024-11-20 06:43:07.627559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.627576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.635244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016efac10 00:32:47.503 [2024-11-20 06:43:07.635934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.635950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.644030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220 00:32:47.503 [2024-11-20 06:43:07.644959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.644976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.651854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:47.503 [2024-11-20 06:43:07.652646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.652662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.660771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220 00:32:47.503 [2024-11-20 06:43:07.661579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12463 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.661598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.669267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:47.503 [2024-11-20 06:43:07.670075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.670092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.677945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee5220 00:32:47.503 [2024-11-20 06:43:07.678752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.678768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.686452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:47.503 [2024-11-20 06:43:07.687266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.687283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.694299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef31b8 00:32:47.503 [2024-11-20 06:43:07.695005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.695021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.703477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.503 [2024-11-20 06:43:07.704265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.704282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.711952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee73e0 00:32:47.503 [2024-11-20 06:43:07.712768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.712785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.720455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6738 00:32:47.503 [2024-11-20 06:43:07.721224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:5079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.503 [2024-11-20 06:43:07.721240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.503 [2024-11-20 06:43:07.728972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.503 [2024-11-20 06:43:07.729746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.504 [2024-11-20 06:43:07.729763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.504 [2024-11-20 06:43:07.738168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.504 [2024-11-20 06:43:07.739060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.504 [2024-11-20 06:43:07.739076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.504 [2024-11-20 06:43:07.746614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eefae0 00:32:47.504 [2024-11-20 06:43:07.747454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.504 [2024-11-20 06:43:07.747470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.504 [2024-11-20 06:43:07.755104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef8a50 00:32:47.504 [2024-11-20 06:43:07.755970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.504 [2024-11-20 06:43:07.755986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.504 [2024-11-20 06:43:07.763597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7538 00:32:47.504 [2024-11-20 06:43:07.764485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.504 [2024-11-20 06:43:07.764501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.504 [2024-11-20 06:43:07.772114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:47.504 [2024-11-20 06:43:07.772993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.504 [2024-11-20 06:43:07.773010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.780626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee27f0 00:32:47.778 [2024-11-20 06:43:07.781490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:20745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.781506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.789134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.778 [2024-11-20 06:43:07.790003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.790019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.797641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eefae0 00:32:47.778 [2024-11-20 06:43:07.798523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.798539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.806135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef8a50 00:32:47.778 [2024-11-20 06:43:07.807007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.807023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.814653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7538 00:32:47.778 [2024-11-20 06:43:07.815522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.815539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.823179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:47.778 [2024-11-20 06:43:07.824067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.824082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.831688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee27f0 00:32:47.778 [2024-11-20 06:43:07.832565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.832581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.840184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.778 [2024-11-20 06:43:07.841079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.841094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.848692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eefae0 00:32:47.778 [2024-11-20 06:43:07.849571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.849587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.857188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef8a50 00:32:47.778 [2024-11-20 06:43:07.858059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.858075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.865709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7538 00:32:47.778 [2024-11-20 06:43:07.866591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.866607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.874213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:47.778 [2024-11-20 06:43:07.875087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.875102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.778 [2024-11-20 06:43:07.882722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee27f0 00:32:47.778 [2024-11-20 06:43:07.883608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.778 [2024-11-20 06:43:07.883630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.891252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.779 [2024-11-20 06:43:07.892120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.892136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.899729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eefae0 00:32:47.779 [2024-11-20 
06:43:07.900601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.900616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.908242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef8a50 00:32:47.779 [2024-11-20 06:43:07.909075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.909091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.916744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7538 00:32:47.779 [2024-11-20 06:43:07.917588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.917604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.925238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:47.779 [2024-11-20 06:43:07.926071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.926087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.933726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee27f0 00:32:47.779 [2024-11-20 06:43:07.934608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.934625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.942221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.779 [2024-11-20 06:43:07.943079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.943095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.950718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eefae0 00:32:47.779 [2024-11-20 06:43:07.951591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.951607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.959247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef8a50 
00:32:47.779 [2024-11-20 06:43:07.960118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.960134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.967739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef7538 00:32:47.779 [2024-11-20 06:43:07.968618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.968635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.976257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee3d08 00:32:47.779 [2024-11-20 06:43:07.977131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.977147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.984738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee27f0 00:32:47.779 [2024-11-20 06:43:07.985574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.985590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:07.993224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ef0bc0 00:32:47.779 [2024-11-20 06:43:07.994106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:07.994122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:08.001743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eefae0 00:32:47.779 [2024-11-20 06:43:08.002616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:08.002633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:08.009626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:47.779 [2024-11-20 06:43:08.010525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:08.010541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:08.018979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) 
with pdu=0x200016ee73e0 00:32:47.779 [2024-11-20 06:43:08.019995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:08.020011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:08.026435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:47.779 [2024-11-20 06:43:08.027132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:08.027148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:08.035003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:47.779 [2024-11-20 06:43:08.035749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:08.035765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:47.779 [2024-11-20 06:43:08.043517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeff18 00:32:47.779 [2024-11-20 06:43:08.044228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.779 [2024-11-20 06:43:08.044244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.052034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:48.041 [2024-11-20 06:43:08.052743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.052760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.060553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:48.041 [2024-11-20 06:43:08.061293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.061309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.069043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:48.041 [2024-11-20 06:43:08.069785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.069801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.077524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d63520) with pdu=0x200016eeff18 00:32:48.041 [2024-11-20 06:43:08.078224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.078240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.086007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:48.041 [2024-11-20 06:43:08.086748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.086763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.094509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:48.041 [2024-11-20 06:43:08.095205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.095221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.103025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:48.041 [2024-11-20 06:43:08.103770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.103788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.111531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeff18 00:32:48.041 [2024-11-20 06:43:08.112272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.112288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.120022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:48.041 [2024-11-20 06:43:08.120764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.120780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.128504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:48.041 [2024-11-20 06:43:08.129219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.129235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.137007] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:48.041 [2024-11-20 06:43:08.137765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.137781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.145569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeff18 00:32:48.041 [2024-11-20 06:43:08.146307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.146323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.154056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:48.041 [2024-11-20 06:43:08.154804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.154820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.162541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:48.041 [2024-11-20 06:43:08.163283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.163300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.171021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:48.041 [2024-11-20 06:43:08.171783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.171799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.179491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeff18 00:32:48.041 [2024-11-20 06:43:08.180229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.180248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.188005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:48.041 [2024-11-20 06:43:08.188750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.188767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.196513] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:48.041 [2024-11-20 06:43:08.197226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.197242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.205024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:48.041 [2024-11-20 06:43:08.205769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.205784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.213525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeff18 00:32:48.041 [2024-11-20 06:43:08.214221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.041 [2024-11-20 06:43:08.214237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.041 [2024-11-20 06:43:08.222002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:48.041 [2024-11-20 06:43:08.222750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.042 [2024-11-20 06:43:08.222765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.042 [2024-11-20 06:43:08.230498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee8088 00:32:48.042 [2024-11-20 06:43:08.231220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.042 [2024-11-20 06:43:08.231236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.042 [2024-11-20 06:43:08.239011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eea680 00:32:48.042 [2024-11-20 06:43:08.239755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.042 [2024-11-20 06:43:08.239771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.042 [2024-11-20 06:43:08.247521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016eeff18 00:32:48.042 [2024-11-20 06:43:08.248228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.042 [2024-11-20 06:43:08.248244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:48.042 [2024-11-20 
06:43:08.256017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63520) with pdu=0x200016ee6300 00:32:48.042 [2024-11-20 06:43:08.256768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.042 [2024-11-20 06:43:08.256784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:32:48.042 29988.00 IOPS, 117.14 MiB/s
00:32:48.042 Latency(us)
00:32:48.042 [2024-11-20T05:43:08.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:48.042 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:48.042 nvme0n1 : 2.00 29997.63 117.18 0.00 0.00 4261.54 2075.31 10922.67
00:32:48.042 [2024-11-20T05:43:08.321Z] ===================================================================================================================
00:32:48.042 [2024-11-20T05:43:08.321Z] Total : 29997.63 117.18 0.00 0.00 4261.54 2075.31 10922.67
00:32:48.042 {
00:32:48.042   "results": [
00:32:48.042     {
00:32:48.042       "job": "nvme0n1",
00:32:48.042       "core_mask": "0x2",
00:32:48.042       "workload": "randwrite",
00:32:48.042       "status": "finished",
00:32:48.042       "queue_depth": 128,
00:32:48.042       "io_size": 4096,
00:32:48.042       "runtime": 2.003625,
00:32:48.042       "iops": 29997.62929689937,
00:32:48.042       "mibps": 117.17823944101316,
00:32:48.042       "io_failed": 0,
00:32:48.042       "io_timeout": 0,
00:32:48.042       "avg_latency_us": 4261.54405430587,
00:32:48.042       "min_latency_us": 2075.306666666667,
00:32:48.042       "max_latency_us": 10922.666666666666
00:32:48.042     }
00:32:48.042   ],
00:32:48.042   "core_count": 1
00:32:48.042 }
00:32:48.042 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:48.042 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:48.042 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:48.042 | .driver_specific
00:32:48.042 | .nvme_error
00:32:48.042 | .status_code
00:32:48.042 | .command_transient_transport_error'
00:32:48.042 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 ))
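The get_transient_errcount step above is the pass/fail heart of this test case: one bdev_get_iostat RPC, then a jq filter over the per-bdev NVMe error counters, and the case passes because the 235 it returns is greater than zero. (The mibps field in the results JSON is simply iops * io_size / 2^20: 29997.63 * 4096 / 1048576 = 117.18.) A minimal standalone sketch of the same check, assuming SPDK_ROOT points at an SPDK checkout and a bdevperf instance is serving RPCs on /var/tmp/bperf.sock; the RPC name, socket path, and jq path are copied from the trace above:

  # Hypothetical standalone equivalent of get_transient_errcount; SPDK_ROOT is
  # an assumed variable, everything else is taken verbatim from the trace.
  errcount=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # This run recorded 235 transient transport errors; any nonzero count passes.
  (( errcount > 0 )) || echo "FAIL: no injected digest errors were counted"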
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3014552
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3014552 ']'
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3014552
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3014552
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3014552'
00:32:48.302 killing process with pid 3014552
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3014552
00:32:48.302 Received shutdown signal, test time was about 2.000000 seconds
00:32:48.302
00:32:48.302 Latency(us)
00:32:48.302 [2024-11-20T05:43:08.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:48.302 [2024-11-20T05:43:08.581Z] ===================================================================================================================
00:32:48.302 [2024-11-20T05:43:08.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:48.302 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3014552
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3015262
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3015262 /var/tmp/bperf.sock
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3015262 ']'
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:48.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:48.563 06:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:48.563 [2024-11-20 06:43:08.683455] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:32:48.563 [2024-11-20 06:43:08.683511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015262 ]
00:32:48.563 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:48.563 Zero copy mechanism will not be used.
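run_bperf_err then repeats the experiment with 128 KiB writes at queue depth 16. The bperfpid/waitforlisten lines above correspond to backgrounding bdevperf and polling its RPC socket until it is ready; a minimal sketch of the equivalent launch, with $SDK-style variable $SPDK_DIR again standing in for the jenkins workspace checkout:

    # Flags from the trace: core mask 0x2, private RPC socket, randwrite,
    # 128 KiB I/O, 2 s runtime, queue depth 16; -z keeps bdevperf idle
    # until perform_tests arrives over RPC.
    "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # waitforlisten (traced above) then polls until /var/tmp/bperf.sock accepts connections.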
00:32:48.563 [2024-11-20 06:43:08.768453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:48.563 [2024-11-20 06:43:08.798091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:49.505 06:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:49.766 nvme0n1
00:32:49.766 06:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:49.766 06:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:49.767 06:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:49.767 06:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:49.767 06:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:49.767 06:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:50.027 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:50.027 Zero copy mechanism will not be used.
00:32:50.027 Running I/O for 2 seconds...
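The burst of digest errors that follows is the intended outcome of the setup just traced, not a malfunction: retries are unlimited at the bdev layer, the controller is attached with data digest (--ddgst) enabled, and the accel error injector is told to corrupt crc32c results, so a slice of the WRITEs fail digest verification and complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of the same RPC sequence; note that rpc.py without -s goes to the target's default socket, as rpc_cmd does above, and $SPDK_DIR is an illustrative path:

    # Keep per-status-code NVMe error counters and retry failed I/O forever.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable      # injection off while attaching
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt crc32c results (interval -i 32, per the trace)
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests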
00:32:50.027 [2024-11-20 06:43:10.112524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:50.028 [2024-11-20 06:43:10.112795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:50.028 [2024-11-20 06:43:10.112820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... roughly seventy further entries with the same three-line pattern elided: each reports a data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8, the affected WRITE (len:32, varying lba, cid:0 or cid:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with timestamps running from 06:43:10.121467 through 06:43:11.009613 ...]
00:32:50.851 [2024-11-20 06:43:11.018776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:50.851 [2024-11-20 06:43:11.018852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:50.851 [2024-11-20 06:43:11.018868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0
m:0 dnr:0 00:32:50.851 [2024-11-20 06:43:11.027293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.851 [2024-11-20 06:43:11.027531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.851 [2024-11-20 06:43:11.027548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.851 [2024-11-20 06:43:11.038509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.851 [2024-11-20 06:43:11.038824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.851 [2024-11-20 06:43:11.038840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.851 [2024-11-20 06:43:11.047914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.851 [2024-11-20 06:43:11.048202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.851 [2024-11-20 06:43:11.048219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.851 [2024-11-20 06:43:11.056138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.851 [2024-11-20 06:43:11.056250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.851 [2024-11-20 06:43:11.056266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.851 [2024-11-20 06:43:11.060594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.851 [2024-11-20 06:43:11.060674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.851 [2024-11-20 06:43:11.060690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.064513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.064567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.064583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.068743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.068801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.068816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.072635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.072700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.072715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.076738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.076821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.076837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.080720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.080805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.080820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.084331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.084413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.084428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.087685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.087740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.087756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.091779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.091823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.091839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.095641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.095702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.095717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.099172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.099251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.099267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.103682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.103734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.103750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.852 4132.00 IOPS, 516.50 MiB/s [2024-11-20T05:43:11.131Z] [2024-11-20 06:43:11.110664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.110712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.110728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.114014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.114058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.114073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.117853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.117901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.117919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.852 [2024-11-20 06:43:11.121662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:50.852 [2024-11-20 06:43:11.121719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.852 [2024-11-20 06:43:11.121735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.115 [2024-11-20 06:43:11.127287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.115 [2024-11-20 06:43:11.127344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.115 [2024-11-20 
06:43:11.127360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.115 [2024-11-20 06:43:11.131900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.115 [2024-11-20 06:43:11.132189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.115 [2024-11-20 06:43:11.132205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.115 [2024-11-20 06:43:11.139823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.115 [2024-11-20 06:43:11.139872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.115 [2024-11-20 06:43:11.139887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.115 [2024-11-20 06:43:11.143862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.115 [2024-11-20 06:43:11.143910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.115 [2024-11-20 06:43:11.143925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.115 [2024-11-20 06:43:11.151075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.115 [2024-11-20 06:43:11.151332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.115 [2024-11-20 06:43:11.151349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.115 [2024-11-20 06:43:11.156715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.156773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.156789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.160521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.160587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.160603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.164409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.164708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:51.116 [2024-11-20 06:43:11.164724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.168547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.168621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.168636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.175758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.176075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.176091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.182145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.182488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.182506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.187218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.187416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.187432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.191050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.191254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.191270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.195116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.195320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.195336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.198620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.198831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.198847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.204272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.204472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.204488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.211164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.211436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.211454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.216119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.216321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.216337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.219971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.220178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.220194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.226792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.227102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.227119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.231147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.231347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.231363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.238549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.238740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.238757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.245105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.245426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.245443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.253314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.253663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.253680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.257245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.257435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.257454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.261093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.261285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.261302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.267763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.267958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.267974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.274620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.274808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.274824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.278407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.278595] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.278610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.283356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.283690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.283707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.287354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.287546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.287562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.116 [2024-11-20 06:43:11.294042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.116 [2024-11-20 06:43:11.294295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.116 [2024-11-20 06:43:11.294312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.302722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.302924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.302940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.310484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.310823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.310841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.315769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.315958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.315975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.320208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.320400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.320416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.324694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.324884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.324901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.329743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.329929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.329945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.336252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.336442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.336459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.344325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.344666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.344683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.349511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.349710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.349727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.355205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.355395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.355412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.360120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 
06:43:11.360314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.360331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.364315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.364506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.364523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.370395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.370582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.370598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.376015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.376211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.376227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.380351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.380544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.380560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.117 [2024-11-20 06:43:11.384334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.117 [2024-11-20 06:43:11.384520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.117 [2024-11-20 06:43:11.384537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.389503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.389828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.389846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.393919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with 
pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.394108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.394125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.397886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.398075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.398098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.401728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.401932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.401949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.405546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.405735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.405752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.411250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.411439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.411456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.414834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.415024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.415041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.418448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.418638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.418656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.422058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.422253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.422270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.425699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.425896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.425912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.429239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.429428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.429445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.432898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.433094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.433110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.436787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.437118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.437136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.380 [2024-11-20 06:43:11.441312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.380 [2024-11-20 06:43:11.441660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.380 [2024-11-20 06:43:11.441678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.446014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.446208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.446225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.449667] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.449857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.449874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.452835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.453023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.453040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.456016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.456210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.456227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.459533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.459719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.459736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.462853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.463042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.463058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.466275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.466463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.466480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.470100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.470292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.470309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.381 
[2024-11-20 06:43:11.474085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.474278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.474295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.477776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.477964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.477980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.481321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.481508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.481525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.484968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.485157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.485178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.488566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.488754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.488771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.491802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.491993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.492009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.381 [2024-11-20 06:43:11.497050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8 00:32:51.381 [2024-11-20 06:43:11.497243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.381 [2024-11-20 06:43:11.497263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0
00:32:51.381 [2024-11-20 06:43:11.500801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:51.381 [2024-11-20 06:43:11.501078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:51.381 [2024-11-20 06:43:11.501096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... this three-line pattern repeats for over a hundred further WRITE commands (timestamps 06:43:11.505 through 06:43:12.105, elapsed 00:32:51.381 through 00:32:51.912): every data PDU fails the CRC32C data digest check at tcp.c:2233 on the same tqpair/pdu, and every command (qid:1, cid:0 or cid:1, len:32, varying lba and sqhd) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), cdw0:0, p:0 m:0 dnr:0 ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:51.911 [2024-11-20 06:43:12.086366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:51.911 [2024-11-20 06:43:12.086441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:51.911 [2024-11-20 06:43:12.086457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:51.911 [2024-11-20 06:43:12.090419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:51.911 [2024-11-20 06:43:12.090515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:51.911 [2024-11-20 06:43:12.090531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:51.911 [2024-11-20 06:43:12.094896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:51.911 [2024-11-20 06:43:12.094946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:51.911 [2024-11-20 06:43:12.094962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:51.911 [2024-11-20 06:43:12.101686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:51.912 [2024-11-20 06:43:12.101740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:51.912 [2024-11-20 06:43:12.101756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:51.912 [2024-11-20 06:43:12.105264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d63860) with pdu=0x200016eff3c8
00:32:51.912 [2024-11-20 06:43:12.105313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:51.912 [2024-11-20 06:43:12.105331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:51.912 5336.50 IOPS, 667.06 MiB/s
00:32:51.912 Latency(us)
00:32:51.912 [2024-11-20T05:43:12.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:51.912 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:51.912 nvme0n1 : 2.00 5338.20 667.28 0.00 0.00 2993.59 1228.80 15400.96
00:32:51.912 [2024-11-20T05:43:12.191Z] ===================================================================================================================
00:32:51.912 [2024-11-20T05:43:12.191Z] Total : 5338.20 667.28 0.00 0.00 2993.59 1228.80 15400.96
00:32:51.912 {
00:32:51.912 "results": [
00:32:51.912 {
00:32:51.912 "job": "nvme0n1",
00:32:51.912 "core_mask": "0x2",
00:32:51.912 "workload": "randwrite",
00:32:51.912 "status": "finished",
00:32:51.912 "queue_depth": 16,
00:32:51.912 "io_size": 131072,
"runtime": 2.003108,
00:32:51.912 "iops": 5338.20443031529,
00:32:51.912 "mibps": 667.2755537894112,
00:32:51.912 "io_failed": 0,
00:32:51.912 "io_timeout": 0,
00:32:51.912 "avg_latency_us": 2993.592608248387,
00:32:51.912 "min_latency_us": 1228.8,
00:32:51.912 "max_latency_us": 15400.96
00:32:51.912 }
00:32:51.912 ],
00:32:51.912 "core_count": 1
00:32:51.912 }
00:32:51.912 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:51.912 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:51.912 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:51.912 | .driver_specific
00:32:51.912 | .nvme_error
00:32:51.912 | .status_code
00:32:51.912 | .command_transient_transport_error'
00:32:51.912 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 345 > 0 ))
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3015262
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3015262 ']'
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3015262
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3015262
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3015262'
00:32:52.173 killing process with pid 3015262
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3015262
00:32:52.173 Received shutdown signal, test time was about 2.000000 seconds
00:32:52.173
00:32:52.173 Latency(us)
00:32:52.173 [2024-11-20T05:43:12.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:52.173 [2024-11-20T05:43:12.452Z] ===================================================================================================================
00:32:52.173 [2024-11-20T05:43:12.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3015262
00:32:52.173 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3012841
00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3012841 ']'
00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3012841
00:32:52.434 06:43:12
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3012841 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3012841' 00:32:52.434 killing process with pid 3012841 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3012841 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3012841 00:32:52.434 00:32:52.434 real 0m16.388s 00:32:52.434 user 0m32.372s 00:32:52.434 sys 0m3.666s 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:52.434 ************************************ 00:32:52.434 END TEST nvmf_digest_error 00:32:52.434 ************************************ 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:52.434 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:52.434 rmmod nvme_tcp 00:32:52.695 rmmod nvme_fabrics 00:32:52.695 rmmod nvme_keyring 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3012841 ']' 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3012841 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3012841 ']' 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3012841 00:32:52.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3012841) - No such process 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3012841 is not found' 00:32:52.695 Process with pid 3012841 is not found 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:52.695 06:43:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.695 06:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.607 06:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:54.607 00:32:54.607 real 0m43.185s 00:32:54.607 user 1m7.824s 00:32:54.607 sys 0m13.050s 00:32:54.607 06:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:54.607 06:43:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:54.607 ************************************ 00:32:54.607 END TEST nvmf_digest 00:32:54.607 ************************************ 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.868 ************************************ 00:32:54.868 START TEST nvmf_bdevperf 00:32:54.868 ************************************ 00:32:54.868 06:43:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:54.868 * Looking for test storage... 
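The pass/fail decision for the digest-error test that just ended comes down to one RPC call and one jq filter: host/digest.sh queries bdev_get_iostat over the bperf control socket and pulls the NVMe error counters out of driver_specific. A minimal standalone sketch of the same check, with the rpc.py path, socket and bdev name taken from the trace above (the final arithmetic test is an illustrative stand-in for the harness's assertion):

    #!/usr/bin/env bash
    # Count WRITEs that completed with TRANSIENT TRANSPORT ERROR on the bperf bdev.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Non-zero means the injected digest errors surfaced as transient transport
    # errors; the traced run counted 345 of them.
    (( errcount > 0 ))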
00:32:54.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:54.868 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:54.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.869 --rc genhtml_branch_coverage=1 00:32:54.869 --rc genhtml_function_coverage=1 00:32:54.869 --rc genhtml_legend=1 00:32:54.869 --rc geninfo_all_blocks=1 00:32:54.869 --rc geninfo_unexecuted_blocks=1 00:32:54.869 00:32:54.869 ' 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:54.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.869 --rc genhtml_branch_coverage=1 00:32:54.869 --rc genhtml_function_coverage=1 00:32:54.869 --rc genhtml_legend=1 00:32:54.869 --rc geninfo_all_blocks=1 00:32:54.869 --rc geninfo_unexecuted_blocks=1 00:32:54.869 00:32:54.869 ' 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:54.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.869 --rc genhtml_branch_coverage=1 00:32:54.869 --rc genhtml_function_coverage=1 00:32:54.869 --rc genhtml_legend=1 00:32:54.869 --rc geninfo_all_blocks=1 00:32:54.869 --rc geninfo_unexecuted_blocks=1 00:32:54.869 00:32:54.869 ' 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:54.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.869 --rc genhtml_branch_coverage=1 00:32:54.869 --rc genhtml_function_coverage=1 00:32:54.869 --rc genhtml_legend=1 00:32:54.869 --rc geninfo_all_blocks=1 00:32:54.869 --rc geninfo_unexecuted_blocks=1 00:32:54.869 00:32:54.869 ' 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.869 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.130 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:55.130 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:55.130 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.130 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:55.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:55.131 06:43:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:03.276 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.276 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:03.277 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:03.277 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:03.277 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:03.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:33:03.277 00:33:03.277 --- 10.0.0.2 ping statistics --- 00:33:03.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.277 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:33:03.277 00:33:03.277 --- 10.0.0.1 ping statistics --- 00:33:03.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.277 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3020257 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3020257 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3020257 ']' 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:03.277 06:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.277 [2024-11-20 06:43:22.806117] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
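The phy-net plumbing traced above gives the target NIC its own network namespace, so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic really crosses the wire instead of short-circuiting through one stack. Condensed from the ip/iptables/ping trace, with the nvmf_tgt launch from nvmfappstart folded in; run as root, and note the backgrounding plus RPC poll is a sketch standing in for the harness's waitforlisten:

    # Isolate the target-side port in a namespace and address both ends of the link.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP (port 4420) on the initiator interface; the harness tags the
    # rule with an SPDK_NVMF comment so the iptables-save/grep -v/iptables-restore
    # cleanup seen earlier can strip it again.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # Start the target inside the namespace (command line as traced above).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    # Poll the default RPC socket until the target answers.
    until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do sleep 0.5; done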
00:33:03.277 [2024-11-20 06:43:22.806191] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.277 [2024-11-20 06:43:22.906084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:03.277 [2024-11-20 06:43:22.958797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.277 [2024-11-20 06:43:22.958846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.277 [2024-11-20 06:43:22.958854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.277 [2024-11-20 06:43:22.958862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.277 [2024-11-20 06:43:22.958868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:03.277 [2024-11-20 06:43:22.960771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:03.277 [2024-11-20 06:43:22.960937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.277 [2024-11-20 06:43:22.960937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.539 [2024-11-20 06:43:23.673094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.539 Malloc0 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:03.539 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.540 [2024-11-20 06:43:23.748235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:03.540 { 00:33:03.540 "params": { 00:33:03.540 "name": "Nvme$subsystem", 00:33:03.540 "trtype": "$TEST_TRANSPORT", 00:33:03.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:03.540 "adrfam": "ipv4", 00:33:03.540 "trsvcid": "$NVMF_PORT", 00:33:03.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:03.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:03.540 "hdgst": ${hdgst:-false}, 00:33:03.540 "ddgst": ${ddgst:-false} 00:33:03.540 }, 00:33:03.540 "method": "bdev_nvme_attach_controller" 00:33:03.540 } 00:33:03.540 EOF 00:33:03.540 )") 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:03.540 06:43:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:03.540 "params": { 00:33:03.540 "name": "Nvme1", 00:33:03.540 "trtype": "tcp", 00:33:03.540 "traddr": "10.0.0.2", 00:33:03.540 "adrfam": "ipv4", 00:33:03.540 "trsvcid": "4420", 00:33:03.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:03.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:03.540 "hdgst": false, 00:33:03.540 "ddgst": false 00:33:03.540 }, 00:33:03.540 "method": "bdev_nvme_attach_controller" 00:33:03.540 }' 00:33:03.540 [2024-11-20 06:43:23.806228] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
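The rpc_cmd sequence traced above (host/bdevperf.sh@17 through @21) is the whole target-side setup for this test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), and subsystem nqn.2016-06.io.spdk:cnode1 exporting it on 10.0.0.2:4420. The same bring-up issued directly, every name, size and flag copied from the trace (rpc_cmd is a thin wrapper around rpc.py):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t tcp -o -u 8192                   # TCP transport
    "$RPC" bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB, 512 B blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # export the bdev
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420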
00:33:03.540 [2024-11-20 06:43:23.806299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020604 ] 00:33:03.802 [2024-11-20 06:43:23.899967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.802 [2024-11-20 06:43:23.953222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.063 Running I/O for 1 seconds... 00:33:05.005 8532.00 IOPS, 33.33 MiB/s 00:33:05.005 Latency(us) 00:33:05.005 [2024-11-20T05:43:25.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.005 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:05.005 Verification LBA range: start 0x0 length 0x4000 00:33:05.005 Nvme1n1 : 1.01 8608.63 33.63 0.00 0.00 14791.02 1078.61 14636.37 00:33:05.005 [2024-11-20T05:43:25.284Z] =================================================================================================================== 00:33:05.005 [2024-11-20T05:43:25.284Z] Total : 8608.63 33.63 0.00 0.00 14791.02 1078.61 14636.37 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3020864 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:05.266 { 00:33:05.266 "params": { 00:33:05.266 "name": "Nvme$subsystem", 00:33:05.266 "trtype": "$TEST_TRANSPORT", 00:33:05.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.266 "adrfam": "ipv4", 00:33:05.266 "trsvcid": "$NVMF_PORT", 00:33:05.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.266 "hdgst": ${hdgst:-false}, 00:33:05.266 "ddgst": ${ddgst:-false} 00:33:05.266 }, 00:33:05.266 "method": "bdev_nvme_attach_controller" 00:33:05.266 } 00:33:05.266 EOF 00:33:05.266 )") 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
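bdevperf takes its bdev layer not over RPC but from a JSON config handed in on an inherited file descriptor: gen_nvmf_target_json (the heredoc traced above) renders the bdev_nvme_attach_controller stanza printed just below, and --json /dev/fd/63 points bdevperf at it. A hand-inlined sketch of this second, 15-second run, writing the config to a temp file instead of the harness's fd handoff; the outer "subsystems" wrapper is the standard SPDK JSON-config shape reconstructed here, since only the inner stanza is visible in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cfg=$(mktemp)
    printf '%s\n' '{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }' > "$cfg"
    # Flags exactly as traced for the -t 15 run.
    "$SPDK/build/examples/bdevperf" --json "$cfg" -q 128 -o 4096 -w verify -t 15 -f

While this run's I/O is in flight the harness kill -9s the nvmf target (host/bdevperf.sh@33 in the trace below), which is what produces the long run of ABORTED - SQ DELETION completions that follows.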
00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:33:05.266 06:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:05.266 "params": {
00:33:05.266 "name": "Nvme1",
00:33:05.266 "trtype": "tcp",
00:33:05.266 "traddr": "10.0.0.2",
00:33:05.267 "adrfam": "ipv4",
00:33:05.267 "trsvcid": "4420",
00:33:05.267 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:05.267 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:05.267 "hdgst": false,
00:33:05.267 "ddgst": false
00:33:05.267 },
00:33:05.267 "method": "bdev_nvme_attach_controller"
00:33:05.267 }'
00:33:05.267 [2024-11-20 06:43:25.370964] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:33:05.267 [2024-11-20 06:43:25.371043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020864 ]
00:33:05.267 [2024-11-20 06:43:25.463151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:05.267 [2024-11-20 06:43:25.517692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:05.838 Running I/O for 15 seconds...
00:33:07.721 10296.00 IOPS, 40.22 MiB/s [2024-11-20T05:43:28.576Z]
10742.50 IOPS, 41.96 MiB/s [2024-11-20T05:43:28.576Z]
06:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3020257
06:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:33:08.297 [2024-11-20 06:43:28.333693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:08.297 [2024-11-20 06:43:28.333732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats for every other I/O outstanding on qid:1 after the kill -9: READ lba:79256 through lba:79520 (SGL TRANSPORT DATA BLOCK) and WRITE lba:79528 through lba:80248 (SGL DATA BLOCK OFFSET), len:8 each, all completed ABORTED - SQ DELETION (00/08) ...]
00:33:08.300 [2024-11-20 06:43:28.336040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:08.300 [2024-11-20 06:43:28.336048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
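For reference, the JSON printed by nvmf/common.sh at the start of this run is one complete JSON-RPC call that configures the NVMe-oF host-side bdev. The sketch below shows how the same call could be issued by hand against a running SPDK application; it is illustrative only, and the /var/tmp/spdk.sock path, the added "id" field, and the single-recv response handling are assumptions, not taken from this log.

    # Minimal JSON-RPC sketch (illustrative; the test feeds this JSON to
    # bdevperf as configuration rather than calling the socket directly).
    import json
    import socket

    # Params copied verbatim from the log above; "id" added because JSON-RPC
    # 2.0 requires one for calls that expect a response.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": False,
            "ddgst": False,
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")  # default SPDK RPC socket (assumption)
        sock.sendall(json.dumps(request).encode())
        # On success the reply lists the created bdev names, e.g. ["Nvme1n1"].
        print(sock.recv(65536).decode())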
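A quick arithmetic check on the abort storm above, using only values from the log: the aborted commands cover lba 79248 through 80256 in steps of 8 blocks, and one final write at lba 80264 is completed manually just below, so exactly 128 commands were in flight when the target was killed. A sketch of the count (reading this as a queue depth of 128 is an inference, not shown in this excerpt):

    # Sanity check on the abort storm; all LBAs taken from the log records.
    blocks_per_cmd = 8
    reads  = (79520 - 79248) // blocks_per_cmd + 1   # 35 aborted READs
    writes = (80264 - 79528) // blocks_per_cmd + 1   # 93 aborted WRITEs (incl. lba:80264)
    total  = (80264 - 79248) // blocks_per_cmd + 1
    assert reads + writes == total == 128            # one abort per in-flight I/O
    print(total)  # 128, consistent with a 128-deep submission queue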
00:33:08.300 [2024-11-20 06:43:28.336057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9671d0 is same with the state(6) to be set
00:33:08.300 [2024-11-20 06:43:28.336067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:08.300 [2024-11-20 06:43:28.336073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:08.301 [2024-11-20 06:43:28.336081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0
00:33:08.301 [2024-11-20 06:43:28.336091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:08.301 [2024-11-20 06:43:28.339693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:08.301 [2024-11-20 06:43:28.339750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:08.301 [2024-11-20 06:43:28.340531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.301 [2024-11-20 06:43:28.340548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:08.301 [2024-11-20 06:43:28.340557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:08.301 [2024-11-20 06:43:28.340775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:08.301 [2024-11-20 06:43:28.340992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:08.301 [2024-11-20 06:43:28.341001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:08.301 [2024-11-20 06:43:28.341010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:08.301 [2024-11-20 06:43:28.341019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the same reset/reconnect cycle repeats every ~14 ms with identical errors (connect() errno = 111 to 10.0.0.2:4420, Bad file descriptor, controller reinitialization failed, "Resetting controller failed.") for the cycles starting at 06:43:28.353806, .367707, .381579, .395401, .409208, .422977, .436787, .450666, .464544, .478492, .492431, .506278, .520089, .534046, .547882 and .561670 ...]
00:33:08.567 [2024-11-20 06:43:28.575489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:08.567 [2024-11-20 06:43:28.576071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.567 [2024-11-20 06:43:28.576096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:08.567 [2024-11-20 06:43:28.576105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:08.567 [2024-11-20 06:43:28.576332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:08.567 [2024-11-20 06:43:28.576551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:08.567 [2024-11-20 06:43:28.576560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:08.567 [2024-11-20 06:43:28.576568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:08.567 [2024-11-20 06:43:28.576575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:08.567 [2024-11-20 06:43:28.589379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.589936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.589958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.589967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.567 [2024-11-20 06:43:28.590196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.567 [2024-11-20 06:43:28.590416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.567 [2024-11-20 06:43:28.590424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.567 [2024-11-20 06:43:28.590432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.567 [2024-11-20 06:43:28.590440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.567 [2024-11-20 06:43:28.603268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.603710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.603741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.603751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.567 [2024-11-20 06:43:28.603970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.567 [2024-11-20 06:43:28.604198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.567 [2024-11-20 06:43:28.604208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.567 [2024-11-20 06:43:28.604215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.567 [2024-11-20 06:43:28.604223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.567 [2024-11-20 06:43:28.617041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.617588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.617612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.617621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.567 [2024-11-20 06:43:28.617838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.567 [2024-11-20 06:43:28.618057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.567 [2024-11-20 06:43:28.618067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.567 [2024-11-20 06:43:28.618075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.567 [2024-11-20 06:43:28.618083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.567 [2024-11-20 06:43:28.630927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.631531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.631555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.631563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.567 [2024-11-20 06:43:28.631781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.567 [2024-11-20 06:43:28.631999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.567 [2024-11-20 06:43:28.632017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.567 [2024-11-20 06:43:28.632026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.567 [2024-11-20 06:43:28.632034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.567 [2024-11-20 06:43:28.644864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.645569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.645633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.645646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.567 [2024-11-20 06:43:28.645906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.567 [2024-11-20 06:43:28.646130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.567 [2024-11-20 06:43:28.646139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.567 [2024-11-20 06:43:28.646147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.567 [2024-11-20 06:43:28.646156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.567 [2024-11-20 06:43:28.658650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.659459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.659524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.659537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.567 [2024-11-20 06:43:28.659789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.567 [2024-11-20 06:43:28.660013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.567 [2024-11-20 06:43:28.660022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.567 [2024-11-20 06:43:28.660030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.567 [2024-11-20 06:43:28.660039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.567 [2024-11-20 06:43:28.672501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.673214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.673278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.673292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.567 [2024-11-20 06:43:28.673546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.567 [2024-11-20 06:43:28.673771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.567 [2024-11-20 06:43:28.673783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.567 [2024-11-20 06:43:28.673791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.567 [2024-11-20 06:43:28.673800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.567 [2024-11-20 06:43:28.686477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.567 [2024-11-20 06:43:28.687157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.567 [2024-11-20 06:43:28.687231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.567 [2024-11-20 06:43:28.687247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.687499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.687723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.687736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.687751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.687760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.568 [2024-11-20 06:43:28.700398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.701029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.701057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.701066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.701297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.701517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.701527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.701536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.701544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.568 [2024-11-20 06:43:28.714153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.714721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.714745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.714753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.714971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.715196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.715206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.715214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.715222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.568 [2024-11-20 06:43:28.728044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.728398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.728433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.728441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.728666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.728885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.728894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.728902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.728910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.568 [2024-11-20 06:43:28.741972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.742596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.742620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.742628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.742849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.743067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.743076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.743084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.743092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.568 [2024-11-20 06:43:28.755930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.756470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.756494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.756502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.756720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.756938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.756949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.756956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.756964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.568 [2024-11-20 06:43:28.769822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.770310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.770335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.770343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.770560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.770779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.770797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.770805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.770813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.568 [2024-11-20 06:43:28.783657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.784292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.784357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.784377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.784629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.784853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.784862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.784870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.784879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.568 [2024-11-20 06:43:28.797538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.798250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.798314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.798328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.798582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.798806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.798816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.798825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.798834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.568 [2024-11-20 06:43:28.811481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.812043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.812108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.568 [2024-11-20 06:43:28.812121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.568 [2024-11-20 06:43:28.812386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.568 [2024-11-20 06:43:28.812611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.568 [2024-11-20 06:43:28.812620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.568 [2024-11-20 06:43:28.812629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.568 [2024-11-20 06:43:28.812638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.568 [2024-11-20 06:43:28.825276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.568 [2024-11-20 06:43:28.825863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.568 [2024-11-20 06:43:28.825891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.569 [2024-11-20 06:43:28.825900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.569 [2024-11-20 06:43:28.826119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.569 [2024-11-20 06:43:28.826356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.569 [2024-11-20 06:43:28.826367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.569 [2024-11-20 06:43:28.826375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.569 [2024-11-20 06:43:28.826382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.569 [2024-11-20 06:43:28.839213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.569 [2024-11-20 06:43:28.839775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.569 [2024-11-20 06:43:28.839799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.569 [2024-11-20 06:43:28.839807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.569 [2024-11-20 06:43:28.840025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.569 [2024-11-20 06:43:28.840253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.569 [2024-11-20 06:43:28.840262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.569 [2024-11-20 06:43:28.840270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.569 [2024-11-20 06:43:28.840278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.832 [2024-11-20 06:43:28.853115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.832 [2024-11-20 06:43:28.853756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.832 [2024-11-20 06:43:28.853821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.832 [2024-11-20 06:43:28.853834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.832 [2024-11-20 06:43:28.854086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.832 [2024-11-20 06:43:28.854325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.854337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.854345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.854354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.833 [2024-11-20 06:43:28.867018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.867683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.867747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.867760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.868012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.868248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.868258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.868275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.868285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.833 [2024-11-20 06:43:28.880945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.881518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.881548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.881557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.881776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.881995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.882005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.882012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.882021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.833 [2024-11-20 06:43:28.894868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.895548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.895612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.895624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.895876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.896099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.896109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.896117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.896126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.833 8763.33 IOPS, 34.23 MiB/s [2024-11-20T05:43:29.112Z] 
[2024-11-20 06:43:28.910225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.910866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.910895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.910904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.911125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.911352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.911362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.911370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.911377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
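Every retry cycle in this stretch has the same shape: a *NOTICE* from nvme_ctrlr_disconnect, connect() failing with errno = 111 (ECONNREFUSED on Linux, i.e. nothing is accepting on 10.0.0.2 port 4420), a flush of the dead qpair, and bdev_nvme_reset_ctrlr_complete reporting "Resetting controller failed." The "8763.33 IOPS, 34.23 MiB/s" line interleaved above is the test harness's periodic throughput sample, not part of the error path. A minimal triage sketch, assuming this console output is saved to a file (build.log is a hypothetical name; grep -o is used because several records share a physical line in this capture):

  # count reconnect attempts, refused connects, and terminal failures;
  # if the three numbers match, the retry loop never made progress
  $ grep -o 'resetting controller' build.log | wc -l
  $ grep -o 'connect() failed, errno = 111' build.log | wc -l
  $ grep -o 'Resetting controller failed.' build.log | wc -l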
00:33:08.833 [2024-11-20 06:43:28.924006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.924581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.924606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.924614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.924833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.925051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.925061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.925068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.925076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.833 [2024-11-20 06:43:28.937917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.938539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.938562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.938570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.938790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.939008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.939018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.939025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.939032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.833 [2024-11-20 06:43:28.951686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.952277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.952322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.952332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.952569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.952793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.833 [2024-11-20 06:43:28.952803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.833 [2024-11-20 06:43:28.952810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.833 [2024-11-20 06:43:28.952818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.833 [2024-11-20 06:43:28.965471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.833 [2024-11-20 06:43:28.966034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.833 [2024-11-20 06:43:28.966060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.833 [2024-11-20 06:43:28.966075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.833 [2024-11-20 06:43:28.966304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.833 [2024-11-20 06:43:28.966536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:28.966547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:28.966555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:28.966563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.834 [2024-11-20 06:43:28.979444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:28.980011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:28.980036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:28.980044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:28.980272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:28.980490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:28.980500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:28.980508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:28.980515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.834 [2024-11-20 06:43:28.993383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:28.993955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:28.993980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:28.993988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:28.994216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:28.994436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:28.994446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:28.994454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:28.994461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.834 [2024-11-20 06:43:29.007329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:29.007895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:29.007919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:29.007928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:29.008146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:29.008383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:29.008392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:29.008400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:29.008408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.834 [2024-11-20 06:43:29.021283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:29.021831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:29.021853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:29.021862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:29.022080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:29.022311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:29.022322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:29.022329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:29.022337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.834 [2024-11-20 06:43:29.035191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:29.035868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:29.035932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:29.035945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:29.036210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:29.036435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:29.036444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:29.036453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:29.036462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.834 [2024-11-20 06:43:29.049086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:29.049804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:29.049867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:29.049880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:29.050133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:29.050368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:29.050378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:29.050393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:29.050403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.834 [2024-11-20 06:43:29.063030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:29.063723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:29.063786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:29.063798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:29.064050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:29.064287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:29.064298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:29.064306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:29.064315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.834 [2024-11-20 06:43:29.076980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:29.077606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.834 [2024-11-20 06:43:29.077636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.834 [2024-11-20 06:43:29.077646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.834 [2024-11-20 06:43:29.077866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.834 [2024-11-20 06:43:29.078085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.834 [2024-11-20 06:43:29.078094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.834 [2024-11-20 06:43:29.078102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.834 [2024-11-20 06:43:29.078109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.834 [2024-11-20 06:43:29.090928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.834 [2024-11-20 06:43:29.091573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.835 [2024-11-20 06:43:29.091637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.835 [2024-11-20 06:43:29.091650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.835 [2024-11-20 06:43:29.091902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.835 [2024-11-20 06:43:29.092126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.835 [2024-11-20 06:43:29.092136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.835 [2024-11-20 06:43:29.092145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.835 [2024-11-20 06:43:29.092154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.835 [2024-11-20 06:43:29.104811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.835 [2024-11-20 06:43:29.105475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.835 [2024-11-20 06:43:29.105538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:08.835 [2024-11-20 06:43:29.105550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:08.835 [2024-11-20 06:43:29.105802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:08.835 [2024-11-20 06:43:29.106026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.835 [2024-11-20 06:43:29.106035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.835 [2024-11-20 06:43:29.106043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.835 [2024-11-20 06:43:29.106053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.097 [2024-11-20 06:43:29.118624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.119225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.119255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.119264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.119485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.119705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.119714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.119722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.119730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.132565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.133140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.133175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.133185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.133405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.133624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.133634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.133643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.133650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.146496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.146939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.146966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.146984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.147216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.147437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.147448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.147457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.147465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.160315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.160968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.161031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.161043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.161307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.161532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.161542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.161550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.161559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.174223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.174963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.175026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.175038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.175320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.175545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.175554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.175563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.175573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.188027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.188736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.188799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.188812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.189064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.189308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.189319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.189327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.189336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.201984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.202702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.202765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.202778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.203030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.203271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.203281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.203290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.203299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.215900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.216561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.216624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.216637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.216888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.217111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.217121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.217129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.217138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.229765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.230422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.230486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.230498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.230750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.230973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.230982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.230998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.231008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.243643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.244233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.244263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.244273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.244493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.244711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.244721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.244729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.244737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.257549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.258206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.258269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.258282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.258535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.258759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.258768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.258776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.258785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.271442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.272123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.272197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.272211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.097 [2024-11-20 06:43:29.272464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.097 [2024-11-20 06:43:29.272688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.097 [2024-11-20 06:43:29.272697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.097 [2024-11-20 06:43:29.272706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.097 [2024-11-20 06:43:29.272715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.097 [2024-11-20 06:43:29.285345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.097 [2024-11-20 06:43:29.286081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.097 [2024-11-20 06:43:29.286143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.097 [2024-11-20 06:43:29.286156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.098 [2024-11-20 06:43:29.286422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.098 [2024-11-20 06:43:29.286646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.098 [2024-11-20 06:43:29.286655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.098 [2024-11-20 06:43:29.286664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.098 [2024-11-20 06:43:29.286672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.098 [2024-11-20 06:43:29.299288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.098 [2024-11-20 06:43:29.299973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.098 [2024-11-20 06:43:29.300035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.098 [2024-11-20 06:43:29.300048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.098 [2024-11-20 06:43:29.300316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.098 [2024-11-20 06:43:29.300542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.098 [2024-11-20 06:43:29.300550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.098 [2024-11-20 06:43:29.300559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.098 [2024-11-20 06:43:29.300568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.098 [2024-11-20 06:43:29.313181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.098 [2024-11-20 06:43:29.313891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.098 [2024-11-20 06:43:29.313955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.098 [2024-11-20 06:43:29.313967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.098 [2024-11-20 06:43:29.314235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.098 [2024-11-20 06:43:29.314460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.098 [2024-11-20 06:43:29.314469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.098 [2024-11-20 06:43:29.314477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.098 [2024-11-20 06:43:29.314486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.098 [2024-11-20 06:43:29.327102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.098 [2024-11-20 06:43:29.327725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.098 [2024-11-20 06:43:29.327788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.098 [2024-11-20 06:43:29.327808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.098 [2024-11-20 06:43:29.328060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.098 [2024-11-20 06:43:29.328299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.098 [2024-11-20 06:43:29.328309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.098 [2024-11-20 06:43:29.328317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.098 [2024-11-20 06:43:29.328326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.098 [2024-11-20 06:43:29.340988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.098 [2024-11-20 06:43:29.341717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.098 [2024-11-20 06:43:29.341781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.098 [2024-11-20 06:43:29.341795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.098 [2024-11-20 06:43:29.342048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.098 [2024-11-20 06:43:29.342289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.098 [2024-11-20 06:43:29.342301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.098 [2024-11-20 06:43:29.342310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.098 [2024-11-20 06:43:29.342320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.098 [2024-11-20 06:43:29.354756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.098 [2024-11-20 06:43:29.355228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.098 [2024-11-20 06:43:29.355259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.098 [2024-11-20 06:43:29.355268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.098 [2024-11-20 06:43:29.355489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.098 [2024-11-20 06:43:29.355707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.098 [2024-11-20 06:43:29.355718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.098 [2024-11-20 06:43:29.355726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.098 [2024-11-20 06:43:29.355733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.098 [2024-11-20 06:43:29.368727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.098 [2024-11-20 06:43:29.369402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.098 [2024-11-20 06:43:29.369465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.098 [2024-11-20 06:43:29.369477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.098 [2024-11-20 06:43:29.369729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.098 [2024-11-20 06:43:29.369960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.098 [2024-11-20 06:43:29.369970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.098 [2024-11-20 06:43:29.369979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.098 [2024-11-20 06:43:29.369988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.360 [2024-11-20 06:43:29.382666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.360 [2024-11-20 06:43:29.383402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.360 [2024-11-20 06:43:29.383465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.360 [2024-11-20 06:43:29.383478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.360 [2024-11-20 06:43:29.383731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.360 [2024-11-20 06:43:29.383955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.360 [2024-11-20 06:43:29.383965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.360 [2024-11-20 06:43:29.383973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.360 [2024-11-20 06:43:29.383982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.360 [2024-11-20 06:43:29.396626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.360 [2024-11-20 06:43:29.397356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.360 [2024-11-20 06:43:29.397419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.360 [2024-11-20 06:43:29.397432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.360 [2024-11-20 06:43:29.397684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.360 [2024-11-20 06:43:29.397909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.360 [2024-11-20 06:43:29.397918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.397927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.397935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.410564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.411317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.411380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.411393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.411645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.411869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.411878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.411886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.411910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.424345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.425027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.425089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.425102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.425373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.425598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.425608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.425616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.425625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.438228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.438845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.438907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.438920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.439187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.439412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.439422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.439431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.439440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.452053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.452671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.452734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.452747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.452999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.453239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.453249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.453257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.453266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.465873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.466629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.466691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.466704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.466957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.467196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.467206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.467215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.467224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.478534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.479070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.479094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.479100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.479261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.479413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.479419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.479425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.479431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.491169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.491751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.491799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.491808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.491984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.492138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.492144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.492151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.492170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.503853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.504475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.504521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.504530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.504709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.504863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.504869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.504875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.504881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.516502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.517096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.517139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.517147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.517328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.517482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.361 [2024-11-20 06:43:29.517489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.361 [2024-11-20 06:43:29.517494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.361 [2024-11-20 06:43:29.517500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.361 [2024-11-20 06:43:29.529097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.361 [2024-11-20 06:43:29.529602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.361 [2024-11-20 06:43:29.529621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.361 [2024-11-20 06:43:29.529627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.361 [2024-11-20 06:43:29.529778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.361 [2024-11-20 06:43:29.529928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.529934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.529940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.529945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.541691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.542220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.542245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.542251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.542409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.542559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.542569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.542574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.542580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.554322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.554909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.554945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.554953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.555120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.555281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.555288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.555295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.555301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.566904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.567471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.567506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.567515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.567684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.567837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.567843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.567848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.567855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.579605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.580146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.580185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.580193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.580359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.580511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.580517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.580523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.580532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.592271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.592846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.592878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.592887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.593052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.593213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.593221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.593226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.593232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.604984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.605541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.605572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.605581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.605745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.605897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.605904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.605909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.605915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.617664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.618290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.618320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.618328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.618492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.618644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.618650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.618655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.618661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.362 [2024-11-20 06:43:29.630269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.362 [2024-11-20 06:43:29.630842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.362 [2024-11-20 06:43:29.630872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.362 [2024-11-20 06:43:29.630881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.362 [2024-11-20 06:43:29.631045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.362 [2024-11-20 06:43:29.631205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.362 [2024-11-20 06:43:29.631212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.362 [2024-11-20 06:43:29.631217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.362 [2024-11-20 06:43:29.631224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.625 [2024-11-20 06:43:29.642969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.625 [2024-11-20 06:43:29.643467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.625 [2024-11-20 06:43:29.643497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.625 [2024-11-20 06:43:29.643506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.625 [2024-11-20 06:43:29.643673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.625 [2024-11-20 06:43:29.643825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.625 [2024-11-20 06:43:29.643831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.625 [2024-11-20 06:43:29.643836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.625 [2024-11-20 06:43:29.643842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.625 [2024-11-20 06:43:29.655724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.625 [2024-11-20 06:43:29.656189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.625 [2024-11-20 06:43:29.656220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.625 [2024-11-20 06:43:29.656228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.625 [2024-11-20 06:43:29.656392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.625 [2024-11-20 06:43:29.656544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.625 [2024-11-20 06:43:29.656550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.625 [2024-11-20 06:43:29.656555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.625 [2024-11-20 06:43:29.656561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.625 [2024-11-20 06:43:29.668306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:09.625 [2024-11-20 06:43:29.668724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.625 [2024-11-20 06:43:29.668754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:09.625 [2024-11-20 06:43:29.668762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:09.625 [2024-11-20 06:43:29.668930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:09.625 [2024-11-20 06:43:29.669090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:09.625 [2024-11-20 06:43:29.669097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:09.625 [2024-11-20 06:43:29.669103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:09.625 [2024-11-20 06:43:29.669108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:09.625 [2024-11-20 06:43:29.680995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.625 [2024-11-20 06:43:29.681572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.625 [2024-11-20 06:43:29.681602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.625 [2024-11-20 06:43:29.681611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.625 [2024-11-20 06:43:29.681775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.625 [2024-11-20 06:43:29.681926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.625 [2024-11-20 06:43:29.681932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.625 [2024-11-20 06:43:29.681937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.625 [2024-11-20 06:43:29.681943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.625 [2024-11-20 06:43:29.693675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.625 [2024-11-20 06:43:29.694169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.625 [2024-11-20 06:43:29.694185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.625 [2024-11-20 06:43:29.694191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.625 [2024-11-20 06:43:29.694340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.625 [2024-11-20 06:43:29.694489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.625 [2024-11-20 06:43:29.694494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.625 [2024-11-20 06:43:29.694500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.625 [2024-11-20 06:43:29.694504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.625 [2024-11-20 06:43:29.706375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.625 [2024-11-20 06:43:29.706905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.625 [2024-11-20 06:43:29.706935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.625 [2024-11-20 06:43:29.706944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.625 [2024-11-20 06:43:29.707108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.625 [2024-11-20 06:43:29.707267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.625 [2024-11-20 06:43:29.707277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.625 [2024-11-20 06:43:29.707283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.707288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.626 [2024-11-20 06:43:29.719037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.719584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.719615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.719624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.719788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.719940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.719946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.719951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.719957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.626 [2024-11-20 06:43:29.731712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.732266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.732296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.732305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.732471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.732623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.732629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.732634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.732640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.626 [2024-11-20 06:43:29.744371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.744860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.744875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.744881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.745030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.745183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.745190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.745195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.745203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.626 [2024-11-20 06:43:29.757055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.757606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.757636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.757645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.757809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.757960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.757967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.757972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.757978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.626 [2024-11-20 06:43:29.769740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.770339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.770370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.770378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.770542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.770694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.770700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.770706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.770711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.626 [2024-11-20 06:43:29.782324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.782869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.782900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.782908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.783072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.783231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.783239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.783244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.783250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.626 [2024-11-20 06:43:29.794982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.795575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.795608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.795616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.795781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.795932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.795938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.795944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.795949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.626 [2024-11-20 06:43:29.807693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.808190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.808205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.808211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.808360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.808509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.626 [2024-11-20 06:43:29.808514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.626 [2024-11-20 06:43:29.808519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.626 [2024-11-20 06:43:29.808524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.626 [2024-11-20 06:43:29.820397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.626 [2024-11-20 06:43:29.820967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.626 [2024-11-20 06:43:29.820997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.626 [2024-11-20 06:43:29.821006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.626 [2024-11-20 06:43:29.821177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.626 [2024-11-20 06:43:29.821330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.627 [2024-11-20 06:43:29.821336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.627 [2024-11-20 06:43:29.821341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.627 [2024-11-20 06:43:29.821347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.627 [2024-11-20 06:43:29.833080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.627 [2024-11-20 06:43:29.833635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.627 [2024-11-20 06:43:29.833666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.627 [2024-11-20 06:43:29.833675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.627 [2024-11-20 06:43:29.833844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.627 [2024-11-20 06:43:29.833996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.627 [2024-11-20 06:43:29.834002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.627 [2024-11-20 06:43:29.834008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.627 [2024-11-20 06:43:29.834013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.627 [2024-11-20 06:43:29.845756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.627 [2024-11-20 06:43:29.846297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.627 [2024-11-20 06:43:29.846328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.627 [2024-11-20 06:43:29.846336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.627 [2024-11-20 06:43:29.846502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.627 [2024-11-20 06:43:29.846654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.627 [2024-11-20 06:43:29.846660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.627 [2024-11-20 06:43:29.846666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.627 [2024-11-20 06:43:29.846672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.627 [2024-11-20 06:43:29.858418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.627 [2024-11-20 06:43:29.858865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.627 [2024-11-20 06:43:29.858895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.627 [2024-11-20 06:43:29.858903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.627 [2024-11-20 06:43:29.859068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.627 [2024-11-20 06:43:29.859225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.627 [2024-11-20 06:43:29.859232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.627 [2024-11-20 06:43:29.859237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.627 [2024-11-20 06:43:29.859243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.627 [2024-11-20 06:43:29.871127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.627 [2024-11-20 06:43:29.871698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.627 [2024-11-20 06:43:29.871729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.627 [2024-11-20 06:43:29.871738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.627 [2024-11-20 06:43:29.871902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.627 [2024-11-20 06:43:29.872053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.627 [2024-11-20 06:43:29.872063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.627 [2024-11-20 06:43:29.872068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.627 [2024-11-20 06:43:29.872074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.627 [2024-11-20 06:43:29.883828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.627 [2024-11-20 06:43:29.884341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.627 [2024-11-20 06:43:29.884371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.627 [2024-11-20 06:43:29.884379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.627 [2024-11-20 06:43:29.884544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.627 [2024-11-20 06:43:29.884695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.627 [2024-11-20 06:43:29.884701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.627 [2024-11-20 06:43:29.884706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.627 [2024-11-20 06:43:29.884712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.627 [2024-11-20 06:43:29.896460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.627 [2024-11-20 06:43:29.897032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.627 [2024-11-20 06:43:29.897063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.627 [2024-11-20 06:43:29.897072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.627 [2024-11-20 06:43:29.897244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.627 [2024-11-20 06:43:29.897397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.627 [2024-11-20 06:43:29.897403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.627 [2024-11-20 06:43:29.897408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.627 [2024-11-20 06:43:29.897413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.891 6572.50 IOPS, 25.67 MiB/s [2024-11-20T05:43:30.170Z] [2024-11-20 06:43:29.909866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.910434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.910464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.910473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.891 [2024-11-20 06:43:29.910637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.891 [2024-11-20 06:43:29.910789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.891 [2024-11-20 06:43:29.910795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.891 [2024-11-20 06:43:29.910800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.891 [2024-11-20 06:43:29.910810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.891 [2024-11-20 06:43:29.922554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.923047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.923061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.923067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.891 [2024-11-20 06:43:29.923221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.891 [2024-11-20 06:43:29.923370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.891 [2024-11-20 06:43:29.923376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.891 [2024-11-20 06:43:29.923381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.891 [2024-11-20 06:43:29.923386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.891 [2024-11-20 06:43:29.935257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.935733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.935745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.935751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.891 [2024-11-20 06:43:29.935899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.891 [2024-11-20 06:43:29.936048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.891 [2024-11-20 06:43:29.936053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.891 [2024-11-20 06:43:29.936058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.891 [2024-11-20 06:43:29.936063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.891 [2024-11-20 06:43:29.947939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.948471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.948502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.948510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.891 [2024-11-20 06:43:29.948675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.891 [2024-11-20 06:43:29.948827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.891 [2024-11-20 06:43:29.948833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.891 [2024-11-20 06:43:29.948838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.891 [2024-11-20 06:43:29.948844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.891 [2024-11-20 06:43:29.960608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.961119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.961153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.961168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.891 [2024-11-20 06:43:29.961335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.891 [2024-11-20 06:43:29.961486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.891 [2024-11-20 06:43:29.961492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.891 [2024-11-20 06:43:29.961497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.891 [2024-11-20 06:43:29.961503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.891 [2024-11-20 06:43:29.973257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.973831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.973862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.973870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.891 [2024-11-20 06:43:29.974034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.891 [2024-11-20 06:43:29.974192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.891 [2024-11-20 06:43:29.974199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.891 [2024-11-20 06:43:29.974204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.891 [2024-11-20 06:43:29.974210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.891 [2024-11-20 06:43:29.985840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.986293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.986308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.986314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.891 [2024-11-20 06:43:29.986463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.891 [2024-11-20 06:43:29.986614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.891 [2024-11-20 06:43:29.986620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.891 [2024-11-20 06:43:29.986626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.891 [2024-11-20 06:43:29.986631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.891 [2024-11-20 06:43:29.998528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.891 [2024-11-20 06:43:29.998991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.891 [2024-11-20 06:43:29.999004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.891 [2024-11-20 06:43:29.999009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:29.999167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:29.999317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:29.999323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:29.999328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:29.999333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.892 [2024-11-20 06:43:30.011241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.011739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.011753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.011758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.011907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.012055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.012061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:30.012066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:30.012071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.892 [2024-11-20 06:43:30.023829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.024163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.024178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.024184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.024333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.024483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.024488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:30.024494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:30.024498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.892 [2024-11-20 06:43:30.036539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.037021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.037034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.037039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.037194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.037344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.037357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:30.037362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:30.037367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.892 [2024-11-20 06:43:30.049120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.049460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.049474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.049482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.049633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.049783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.049790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:30.049797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:30.049803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.892 [2024-11-20 06:43:30.061702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.062143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.062156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.062167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.062316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.062464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.062470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:30.062475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:30.062480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.892 [2024-11-20 06:43:30.074389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.074877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.074907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.074916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.075082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.075248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.075256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:30.075261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:30.075267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.892 [2024-11-20 06:43:30.087020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.087585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.087600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.087606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.087756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.087905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.087911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.892 [2024-11-20 06:43:30.087917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.892 [2024-11-20 06:43:30.087922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.892 [2024-11-20 06:43:30.099666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.892 [2024-11-20 06:43:30.100114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.892 [2024-11-20 06:43:30.100127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.892 [2024-11-20 06:43:30.100132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.892 [2024-11-20 06:43:30.100285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.892 [2024-11-20 06:43:30.100435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.892 [2024-11-20 06:43:30.100442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.893 [2024-11-20 06:43:30.100448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.893 [2024-11-20 06:43:30.100453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.893 [2024-11-20 06:43:30.112348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.893 [2024-11-20 06:43:30.112829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.893 [2024-11-20 06:43:30.112842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.893 [2024-11-20 06:43:30.112847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.893 [2024-11-20 06:43:30.112996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.893 [2024-11-20 06:43:30.113145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.893 [2024-11-20 06:43:30.113151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.893 [2024-11-20 06:43:30.113156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.893 [2024-11-20 06:43:30.113168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.893 [2024-11-20 06:43:30.125048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.893 [2024-11-20 06:43:30.125498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.893 [2024-11-20 06:43:30.125514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.893 [2024-11-20 06:43:30.125519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.893 [2024-11-20 06:43:30.125668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.893 [2024-11-20 06:43:30.125817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.893 [2024-11-20 06:43:30.125823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.893 [2024-11-20 06:43:30.125828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.893 [2024-11-20 06:43:30.125833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.893 [2024-11-20 06:43:30.137711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.893 [2024-11-20 06:43:30.138176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.893 [2024-11-20 06:43:30.138189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.893 [2024-11-20 06:43:30.138194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.893 [2024-11-20 06:43:30.138343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.893 [2024-11-20 06:43:30.138491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.893 [2024-11-20 06:43:30.138497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.893 [2024-11-20 06:43:30.138502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.893 [2024-11-20 06:43:30.138506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:09.893 [2024-11-20 06:43:30.150417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.893 [2024-11-20 06:43:30.150993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.893 [2024-11-20 06:43:30.151023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.893 [2024-11-20 06:43:30.151032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.893 [2024-11-20 06:43:30.151206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.893 [2024-11-20 06:43:30.151358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.893 [2024-11-20 06:43:30.151365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.893 [2024-11-20 06:43:30.151370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.893 [2024-11-20 06:43:30.151375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:09.893 [2024-11-20 06:43:30.162993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:09.893 [2024-11-20 06:43:30.163385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.893 [2024-11-20 06:43:30.163400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:09.893 [2024-11-20 06:43:30.163406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:09.893 [2024-11-20 06:43:30.163559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:09.893 [2024-11-20 06:43:30.163708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:09.893 [2024-11-20 06:43:30.163714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:09.893 [2024-11-20 06:43:30.163719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:09.893 [2024-11-20 06:43:30.163724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:10.155 [2024-11-20 06:43:30.175646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:10.155 [2024-11-20 06:43:30.176239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.155 [2024-11-20 06:43:30.176270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:10.155 [2024-11-20 06:43:30.176278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:10.155 [2024-11-20 06:43:30.176445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:10.155 [2024-11-20 06:43:30.176597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:10.155 [2024-11-20 06:43:30.176603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:10.155 [2024-11-20 06:43:30.176608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:10.155 [2024-11-20 06:43:30.176614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:10.155 [2024-11-20 06:43:30.188229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:10.155 [2024-11-20 06:43:30.188717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.155 [2024-11-20 06:43:30.188732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:10.155 [2024-11-20 06:43:30.188737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:10.155 [2024-11-20 06:43:30.188886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:10.155 [2024-11-20 06:43:30.189035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:10.155 [2024-11-20 06:43:30.189041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:10.155 [2024-11-20 06:43:30.189046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:10.155 [2024-11-20 06:43:30.189051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:10.155 [2024-11-20 06:43:30.200929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:10.155 [2024-11-20 06:43:30.201368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.155 [2024-11-20 06:43:30.201382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:10.156 [2024-11-20 06:43:30.201387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:10.156 [2024-11-20 06:43:30.201536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:10.156 [2024-11-20 06:43:30.201685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:10.156 [2024-11-20 06:43:30.201691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:10.156 [2024-11-20 06:43:30.201700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:10.156 [2024-11-20 06:43:30.201705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:10.156 [2024-11-20 06:43:30.213582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.213920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.213933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.213938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.214087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.214240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.214246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.214251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.214256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.156 [2024-11-20 06:43:30.226281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.226761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.226773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.226779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.226927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.227076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.227081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.227087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.227091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.156 [2024-11-20 06:43:30.238973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.239517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.239548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.239557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.239721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.239872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.239878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.239883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.239888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.156 [2024-11-20 06:43:30.251631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.252013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.252028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.252033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.252187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.252337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.252343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.252348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.252353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.156 [2024-11-20 06:43:30.264231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.264677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.264708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.264716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.264882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.265033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.265039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.265045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.265050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.156 [2024-11-20 06:43:30.276815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.277424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.277454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.277463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.277627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.277778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.277784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.277790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.277796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.156 [2024-11-20 06:43:30.289394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.290004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.290035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.290047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.290218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.290371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.290377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.290382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.290388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.156 [2024-11-20 06:43:30.301986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.156 [2024-11-20 06:43:30.302594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.156 [2024-11-20 06:43:30.302625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.156 [2024-11-20 06:43:30.302634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.156 [2024-11-20 06:43:30.302798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.156 [2024-11-20 06:43:30.302949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.156 [2024-11-20 06:43:30.302956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.156 [2024-11-20 06:43:30.302961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.156 [2024-11-20 06:43:30.302967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.314574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.315024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.315039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.315044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.315198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.315347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.315353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.315358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.315363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.327240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.327710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.327723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.327728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.327876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.328029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.328034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.328039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.328044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.339833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.340476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.340506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.340515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.340679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.340830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.340836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.340842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.340848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.352459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.353031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.353061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.353069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.353240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.353393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.353400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.353406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.353411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.365165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.365737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.365768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.365776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.365940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.366092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.366098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.366106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.366112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.377879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.378156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.378176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.378182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.378331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.378480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.378486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.378491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.378496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.390517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.390883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.390914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.390922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.391089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.391245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.391252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.391257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.391263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.403224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.157 [2024-11-20 06:43:30.403790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.157 [2024-11-20 06:43:30.403820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.157 [2024-11-20 06:43:30.403829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.157 [2024-11-20 06:43:30.403993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.157 [2024-11-20 06:43:30.404145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.157 [2024-11-20 06:43:30.404151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.157 [2024-11-20 06:43:30.404156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.157 [2024-11-20 06:43:30.404170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.157 [2024-11-20 06:43:30.415919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.158 [2024-11-20 06:43:30.416413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.158 [2024-11-20 06:43:30.416429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.158 [2024-11-20 06:43:30.416434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.158 [2024-11-20 06:43:30.416584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.158 [2024-11-20 06:43:30.416733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.158 [2024-11-20 06:43:30.416738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.158 [2024-11-20 06:43:30.416743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.158 [2024-11-20 06:43:30.416748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.158 [2024-11-20 06:43:30.428627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.158 [2024-11-20 06:43:30.429195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.158 [2024-11-20 06:43:30.429226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.158 [2024-11-20 06:43:30.429235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.158 [2024-11-20 06:43:30.429401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.158 [2024-11-20 06:43:30.429553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.158 [2024-11-20 06:43:30.429559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.158 [2024-11-20 06:43:30.429564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.158 [2024-11-20 06:43:30.429570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.420 [2024-11-20 06:43:30.441315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.420 [2024-11-20 06:43:30.441890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.420 [2024-11-20 06:43:30.441921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.420 [2024-11-20 06:43:30.441930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.420 [2024-11-20 06:43:30.442094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.420 [2024-11-20 06:43:30.442252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.420 [2024-11-20 06:43:30.442258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.420 [2024-11-20 06:43:30.442264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.420 [2024-11-20 06:43:30.442270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.420 [2024-11-20 06:43:30.454002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.420 [2024-11-20 06:43:30.454485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.420 [2024-11-20 06:43:30.454500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.420 [2024-11-20 06:43:30.454510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.454659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.454808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.454813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.454818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.454823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.466682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.467167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.467181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.467186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.467335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.467484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.467489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.467494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.467498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.479382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.479870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.479883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.479888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.480037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.480192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.480199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.480204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.480208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.492064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.492544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.492574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.492583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.492750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.492905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.492911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.492916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.492922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.504665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.505150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.505170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.505176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.505324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.505473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.505479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.505484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.505488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.517366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.517936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.517967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.517975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.518141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.518299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.518306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.518312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.518317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.530061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.530537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.530567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.530576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.530744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.530895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.530902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.530911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.530917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.542660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.543110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.543126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.543131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.543286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.543436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.543442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.543447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.543451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.555328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.555916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.555946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.421 [2024-11-20 06:43:30.555955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.421 [2024-11-20 06:43:30.556119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.421 [2024-11-20 06:43:30.556277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.421 [2024-11-20 06:43:30.556284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.421 [2024-11-20 06:43:30.556290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.421 [2024-11-20 06:43:30.556296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.421 [2024-11-20 06:43:30.568040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.421 [2024-11-20 06:43:30.568605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.421 [2024-11-20 06:43:30.568636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.568645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.568811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.568962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.568969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.568974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.568980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.580739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.581399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.581430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.581438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.581605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.581756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.581762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.581768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.581773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.593370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.593949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.593980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.593988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.594155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.594314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.594320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.594325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.594331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.606068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.606568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.606583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.606589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.606738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.606886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.606892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.606897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.606902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.618769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.619350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.619381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.619393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.619560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.619712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.619717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.619723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.619729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.631472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.631968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.631983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.631989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.632138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.632291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.632298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.632303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.632308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.644049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.644566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.644597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.644606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.644770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.644921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.644927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.644932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.644938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.656737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.657363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.657394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.657402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.657566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.657724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.657730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.657735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.657741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.669359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.669888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.422 [2024-11-20 06:43:30.669903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.422 [2024-11-20 06:43:30.669909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.422 [2024-11-20 06:43:30.670058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.422 [2024-11-20 06:43:30.670211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.422 [2024-11-20 06:43:30.670217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.422 [2024-11-20 06:43:30.670222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.422 [2024-11-20 06:43:30.670227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.422 [2024-11-20 06:43:30.681980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.422 [2024-11-20 06:43:30.682451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.423 [2024-11-20 06:43:30.682464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.423 [2024-11-20 06:43:30.682470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.423 [2024-11-20 06:43:30.682619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.423 [2024-11-20 06:43:30.682767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.423 [2024-11-20 06:43:30.682773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.423 [2024-11-20 06:43:30.682778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.423 [2024-11-20 06:43:30.682783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.423 [2024-11-20 06:43:30.694669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.695130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.695168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.695178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.695343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.695494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.695501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.695510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.695516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.707266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.707829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.707860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.707869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.708033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.708191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.708197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.708203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.708209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.719955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.720508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.720539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.720548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.720712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.720864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.720869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.720875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.720880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.732633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.733089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.733103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.733109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.733262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.733411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.733417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.733422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.733427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.745303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.745794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.745807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.745812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.745961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.746109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.746115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.746120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.746124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.758007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.758495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.758507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.758512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.758661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.758809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.758814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.758819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.758824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.770695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.771251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.771282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.771291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.771456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.771608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.771613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.771619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.771624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.783403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.783985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.784015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.784027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.784197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.784350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.784356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.784361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.784367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.796110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.796626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.796641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.796647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.796797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.796945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.796951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.796956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.796960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.808701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.809243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.809274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.809282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.809449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.809601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.809607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.809612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.809618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.821361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.821773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.821804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.821812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.821977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.822128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.822138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.822143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.822149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.834040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.834661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.834692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.834700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.834864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.835015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.835022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.835027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.835033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.846639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.847188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.847218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.847226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.847392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.847544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.847550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.847555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.847561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.859309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.859798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.859828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.859837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.860001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.860152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.860165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.860171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.860181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.871924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.872490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.872520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.872529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.872696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.872855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.872862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.872868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.872874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.884624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.885217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.885248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.885257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.885423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.885575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.885580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.885586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.885591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 [2024-11-20 06:43:30.897340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.897807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.897838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.897846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.898010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.898169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.898176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.898181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.685 [2024-11-20 06:43:30.898187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.685 5258.00 IOPS, 20.54 MiB/s [2024-11-20T05:43:30.964Z] [2024-11-20 06:43:30.911064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.685 [2024-11-20 06:43:30.911636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.685 [2024-11-20 06:43:30.911666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.685 [2024-11-20 06:43:30.911675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.685 [2024-11-20 06:43:30.911839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.685 [2024-11-20 06:43:30.911991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.685 [2024-11-20 06:43:30.911997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.685 [2024-11-20 06:43:30.912002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.686 [2024-11-20 06:43:30.912008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.686 [2024-11-20 06:43:30.923754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.686 [2024-11-20 06:43:30.924247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.686 [2024-11-20 06:43:30.924278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.686 [2024-11-20 06:43:30.924287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.686 [2024-11-20 06:43:30.924451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.686 [2024-11-20 06:43:30.924602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.686 [2024-11-20 06:43:30.924608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.686 [2024-11-20 06:43:30.924614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.686 [2024-11-20 06:43:30.924619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.686 [2024-11-20 06:43:30.936367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.686 [2024-11-20 06:43:30.936949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.686 [2024-11-20 06:43:30.936980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.686 [2024-11-20 06:43:30.936988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.686 [2024-11-20 06:43:30.937152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.686 [2024-11-20 06:43:30.937312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.686 [2024-11-20 06:43:30.937318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.686 [2024-11-20 06:43:30.937324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.686 [2024-11-20 06:43:30.937329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.686 [2024-11-20 06:43:30.949074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.686 [2024-11-20 06:43:30.949630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.686 [2024-11-20 06:43:30.949661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.686 [2024-11-20 06:43:30.949675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.686 [2024-11-20 06:43:30.949839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.686 [2024-11-20 06:43:30.949991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.686 [2024-11-20 06:43:30.949997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.686 [2024-11-20 06:43:30.950002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.686 [2024-11-20 06:43:30.950008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.948 [2024-11-20 06:43:30.961755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.948 [2024-11-20 06:43:30.962365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.948 [2024-11-20 06:43:30.962396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.948 [2024-11-20 06:43:30.962405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.948 [2024-11-20 06:43:30.962569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.948 [2024-11-20 06:43:30.962720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.948 [2024-11-20 06:43:30.962726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.948 [2024-11-20 06:43:30.962732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.948 [2024-11-20 06:43:30.962737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.948 [2024-11-20 06:43:30.974350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.948 [2024-11-20 06:43:30.974923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.948 [2024-11-20 06:43:30.974953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.948 [2024-11-20 06:43:30.974962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.948 [2024-11-20 06:43:30.975126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.948 [2024-11-20 06:43:30.975285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.948 [2024-11-20 06:43:30.975292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.948 [2024-11-20 06:43:30.975298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.948 [2024-11-20 06:43:30.975304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.948 [2024-11-20 06:43:30.987053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.948 [2024-11-20 06:43:30.987510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.948 [2024-11-20 06:43:30.987539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.948 [2024-11-20 06:43:30.987548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.948 [2024-11-20 06:43:30.987713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.948 [2024-11-20 06:43:30.987865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.948 [2024-11-20 06:43:30.987875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.948 [2024-11-20 06:43:30.987880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.948 [2024-11-20 06:43:30.987886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.948 [2024-11-20 06:43:30.999628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.948 [2024-11-20 06:43:31.000186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.948 [2024-11-20 06:43:31.000217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.948 [2024-11-20 06:43:31.000225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.948 [2024-11-20 06:43:31.000392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.948 [2024-11-20 06:43:31.000544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.948 [2024-11-20 06:43:31.000550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.948 [2024-11-20 06:43:31.000555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.948 [2024-11-20 06:43:31.000561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.948 [2024-11-20 06:43:31.012299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.012827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.012857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.012866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.013030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.013187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.013193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.013199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.013205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.024948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.025423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.025439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.025444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.025593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.025742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.025748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.025753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.025761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.037639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.038123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.038135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.038140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.038293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.038442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.038447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.038453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.038457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.050332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.050817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.050829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.050834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.050982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.051130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.051136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.051141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.051145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.063023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.063590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.063621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.063629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.063796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.063947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.063953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.063959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.063964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.075703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.076259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.076289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.076298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.076462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.076613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.076619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.076625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.076631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.088387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.088930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.088960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.088968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.089132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.089289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.089296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.089302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.089308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.101050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.101642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.101673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.101682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.101846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.101997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.102004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.102009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.102015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.113761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.114260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.114291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.114299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.114469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.114621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.114627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.114632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.114638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.949 [2024-11-20 06:43:31.126388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.949 [2024-11-20 06:43:31.126924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.949 [2024-11-20 06:43:31.126954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.949 [2024-11-20 06:43:31.126963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.949 [2024-11-20 06:43:31.127127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.949 [2024-11-20 06:43:31.127285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.949 [2024-11-20 06:43:31.127292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.949 [2024-11-20 06:43:31.127298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.949 [2024-11-20 06:43:31.127303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.950 [2024-11-20 06:43:31.139035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.950 [2024-11-20 06:43:31.139373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.950 [2024-11-20 06:43:31.139388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.950 [2024-11-20 06:43:31.139393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.950 [2024-11-20 06:43:31.139542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.950 [2024-11-20 06:43:31.139691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.950 [2024-11-20 06:43:31.139696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.950 [2024-11-20 06:43:31.139701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.950 [2024-11-20 06:43:31.139706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.950 [2024-11-20 06:43:31.151722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.950 [2024-11-20 06:43:31.152225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.950 [2024-11-20 06:43:31.152238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.950 [2024-11-20 06:43:31.152244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.950 [2024-11-20 06:43:31.152392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.950 [2024-11-20 06:43:31.152540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.950 [2024-11-20 06:43:31.152550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.950 [2024-11-20 06:43:31.152555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.950 [2024-11-20 06:43:31.152559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.950 [2024-11-20 06:43:31.164297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.950 [2024-11-20 06:43:31.164880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.950 [2024-11-20 06:43:31.164910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.950 [2024-11-20 06:43:31.164919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.950 [2024-11-20 06:43:31.165083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.950 [2024-11-20 06:43:31.165242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.950 [2024-11-20 06:43:31.165249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.950 [2024-11-20 06:43:31.165254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.950 [2024-11-20 06:43:31.165260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.950 [2024-11-20 06:43:31.176868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.950 [2024-11-20 06:43:31.177489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.950 [2024-11-20 06:43:31.177519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.950 [2024-11-20 06:43:31.177528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.950 [2024-11-20 06:43:31.177692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.950 [2024-11-20 06:43:31.177843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.950 [2024-11-20 06:43:31.177849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.950 [2024-11-20 06:43:31.177855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.950 [2024-11-20 06:43:31.177860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.950 [2024-11-20 06:43:31.189472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.950 [2024-11-20 06:43:31.190044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.950 [2024-11-20 06:43:31.190074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.950 [2024-11-20 06:43:31.190082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.950 [2024-11-20 06:43:31.190254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.950 [2024-11-20 06:43:31.190407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.950 [2024-11-20 06:43:31.190413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.950 [2024-11-20 06:43:31.190418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.950 [2024-11-20 06:43:31.190427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.950 [2024-11-20 06:43:31.202162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.950 [2024-11-20 06:43:31.202730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.950 [2024-11-20 06:43:31.202761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.950 [2024-11-20 06:43:31.202770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.950 [2024-11-20 06:43:31.202934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.950 [2024-11-20 06:43:31.203085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.950 [2024-11-20 06:43:31.203091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.950 [2024-11-20 06:43:31.203096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.950 [2024-11-20 06:43:31.203102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:10.950 [2024-11-20 06:43:31.214845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:10.950 [2024-11-20 06:43:31.215452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.950 [2024-11-20 06:43:31.215482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:10.950 [2024-11-20 06:43:31.215491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:10.950 [2024-11-20 06:43:31.215657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:10.950 [2024-11-20 06:43:31.215808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:10.950 [2024-11-20 06:43:31.215814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:10.950 [2024-11-20 06:43:31.215820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:10.950 [2024-11-20 06:43:31.215825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:11.213 [2024-11-20 06:43:31.227430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:11.213 [2024-11-20 06:43:31.228057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.213 [2024-11-20 06:43:31.228088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:11.213 [2024-11-20 06:43:31.228096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:11.213 [2024-11-20 06:43:31.228268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:11.213 [2024-11-20 06:43:31.228421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:11.213 [2024-11-20 06:43:31.228427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:11.213 [2024-11-20 06:43:31.228432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:11.213 [2024-11-20 06:43:31.228438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:11.213 [2024-11-20 06:43:31.240030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:11.213 [2024-11-20 06:43:31.240580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.213 [2024-11-20 06:43:31.240615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:11.213 [2024-11-20 06:43:31.240623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:11.213 [2024-11-20 06:43:31.240787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:11.213 [2024-11-20 06:43:31.240939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:11.213 [2024-11-20 06:43:31.240945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:11.213 [2024-11-20 06:43:31.240950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:11.213 [2024-11-20 06:43:31.240955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:11.213 [2024-11-20 06:43:31.252694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:11.213 [2024-11-20 06:43:31.253187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.213 [2024-11-20 06:43:31.253217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:11.213 [2024-11-20 06:43:31.253226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:11.213 [2024-11-20 06:43:31.253392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:11.213 [2024-11-20 06:43:31.253544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:11.213 [2024-11-20 06:43:31.253550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:11.213 [2024-11-20 06:43:31.253555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:11.213 [2024-11-20 06:43:31.253561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:11.213 [2024-11-20 06:43:31.265309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:11.213 [2024-11-20 06:43:31.265845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.213 [2024-11-20 06:43:31.265876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420
00:33:11.213 [2024-11-20 06:43:31.265884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set
00:33:11.213 [2024-11-20 06:43:31.266048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor
00:33:11.213 [2024-11-20 06:43:31.266206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:11.213 [2024-11-20 06:43:31.266214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:11.213 [2024-11-20 06:43:31.266219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:11.213 [2024-11-20 06:43:31.266225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:11.213 [2024-11-20 06:43:31.277978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.213 [2024-11-20 06:43:31.278535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.213 [2024-11-20 06:43:31.278565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.213 [2024-11-20 06:43:31.278573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.213 [2024-11-20 06:43:31.278745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.213 [2024-11-20 06:43:31.278896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.213 [2024-11-20 06:43:31.278903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.213 [2024-11-20 06:43:31.278909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.213 [2024-11-20 06:43:31.278915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.213 [2024-11-20 06:43:31.290677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.213 [2024-11-20 06:43:31.291182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.213 [2024-11-20 06:43:31.291200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.213 [2024-11-20 06:43:31.291206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.213 [2024-11-20 06:43:31.291356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.213 [2024-11-20 06:43:31.291504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.213 [2024-11-20 06:43:31.291510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.213 [2024-11-20 06:43:31.291515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.213 [2024-11-20 06:43:31.291520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:11.213 [2024-11-20 06:43:31.303254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.213 [2024-11-20 06:43:31.303791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.213 [2024-11-20 06:43:31.303821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.213 [2024-11-20 06:43:31.303830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.213 [2024-11-20 06:43:31.303994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.213 [2024-11-20 06:43:31.304146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.213 [2024-11-20 06:43:31.304152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.213 [2024-11-20 06:43:31.304157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.213 [2024-11-20 06:43:31.304170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.213 [2024-11-20 06:43:31.315900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.213 [2024-11-20 06:43:31.316450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.213 [2024-11-20 06:43:31.316481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.213 [2024-11-20 06:43:31.316490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.213 [2024-11-20 06:43:31.316655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.213 [2024-11-20 06:43:31.316806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.213 [2024-11-20 06:43:31.316815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.213 [2024-11-20 06:43:31.316821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.213 [2024-11-20 06:43:31.316826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
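For readers skimming the failure: errno 111 on Linux is ECONNREFUSED. The host-side bdev_nvme reset path keeps calling connect() against 10.0.0.2:4420 while nothing is listening there, so every reconnect poll fails immediately and the reset attempt is marked failed. A minimal sketch of the same check from bash (address and port are taken from the log; the rest is purely illustrative):

  # With no listener bound to 10.0.0.2:4420, connect() fails with
  # ECONNREFUSED (errno 111), which is what posix_sock_create reports above.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "target is accepting connections on 4420"
  else
      echo "connection refused - nvmf_tgt not listening yet"
  fi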
00:33:11.213 [identical cycle repeats at 06:43:31.328]
00:33:11.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3020257 Killed "${NVMF_APP[@]}" "$@"
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3021974
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3021974
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3021974 ']'
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:11.214 [2024-11-20 06:43:31.341286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:11.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
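At this point the script has deliberately killed the old target (pid 3020257), and tgt_init/nvmfappstart relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 3021974; the reconnect errors keep scrolling because the host-side retry loop runs until the new target listens again. Conceptually, waitforlisten polls for the RPC Unix socket. A minimal sketch under that assumption (the real helper in autotest_common.sh also checks the pid is alive and retries RPC calls; only rpc_addr and max_retries below come from the trace):

  rpc_sock=/var/tmp/spdk.sock          # rpc_addr printed in the trace above
  tries=100                            # max_retries=100, also from the trace
  until [[ -S "$rpc_sock" ]] || (( --tries == 0 )); do
      sleep 0.1                        # poll until nvmf_tgt creates its RPC socket
  done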
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:11.214 06:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:11.214 [the 06:43:31.341 reset/ECONNREFUSED cycle completes, and repeats at 06:43:31.353]
00:33:11.214 [identical cycle repeats at 06:43:31.366 and 06:43:31.379]
00:33:11.214 [2024-11-20 06:43:31.391606] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:33:11.214 [2024-11-20 06:43:31.391651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:11.214 [identical cycle repeats at 06:43:31.391 and 06:43:31.404]
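The EAL parameter line is the freshly launched target translating its own command-line flags; its startup output now interleaves with the host's still-failing reconnect loop on the same console. How the flags appear to map in this log (the -i/--file-prefix correspondence is inferred from the values matching here, not asserted as the general rule):

  nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # as launched by nvmfappstart above
  #   -m 0xE    -> EAL '-c 0xE' core mask
  #   -i 0      -> EAL '--file-prefix=spdk0' shared-memory instance id
  #   -e 0xFFFF -> tracepoint group mask reported by app_setup_trace below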
00:33:11.214 [identical cycle repeats at 06:43:31.417, .429, .442, .455, .467 and .480]
00:33:11.215 [2024-11-20 06:43:31.481768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:11.477 [2024-11-20 06:43:31.492935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.477 [2024-11-20 06:43:31.493345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.477 [2024-11-20 06:43:31.493378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.477 [2024-11-20 06:43:31.493387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.477 [2024-11-20 06:43:31.493554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.477 [2024-11-20 06:43:31.493706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.477 [2024-11-20 06:43:31.493712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.477 [2024-11-20 06:43:31.493718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.477 [2024-11-20 06:43:31.493723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.477 [2024-11-20 06:43:31.505623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.477 [2024-11-20 06:43:31.506138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.477 [2024-11-20 06:43:31.506153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.477 [2024-11-20 06:43:31.506169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.477 [2024-11-20 06:43:31.506319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.477 [2024-11-20 06:43:31.506468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.477 [2024-11-20 06:43:31.506474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.477 [2024-11-20 06:43:31.506479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.477 [2024-11-20 06:43:31.506484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.477 [2024-11-20 06:43:31.511036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.477 [2024-11-20 06:43:31.511057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.477 [2024-11-20 06:43:31.511064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.477 [2024-11-20 06:43:31.511069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.477 [2024-11-20 06:43:31.511075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:11.477 [2024-11-20 06:43:31.512183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:11.477 [2024-11-20 06:43:31.512279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:11.477 [2024-11-20 06:43:31.512392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:11.477 [identical cycle repeats at 06:43:31.518 and 06:43:31.530]
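The three reactors line up with the 0xE core mask passed above: 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left out, matching the "Total cores available: 3" notice. A quick way to decode such a mask in the same shell the harness uses:

  mask=0xE                                  # core mask from 'nvmfappstart -m 0xE'
  for core in {0..7}; do                    # the low 8 bits cover this mask
      (( (mask >> core) & 1 )) && echo "reactor will run on core $core"
  done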
00:33:11.477 [identical reset/ECONNREFUSED cycle repeats roughly every 12-13 ms at 06:43:31.543, .556, .568, .581, .593, .606, .619, .631, .644, .657, .669, .682, .694, .707, .720, .732 and .745]
00:33:11.740 [cycle repeats at 06:43:31.758, .770, .783, .796 and .808]
00:33:11.740 [2024-11-20 06:43:31.821623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.740 [2024-11-20 06:43:31.822246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.740 [2024-11-20 06:43:31.822276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.740 [2024-11-20 06:43:31.822285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.740 [2024-11-20 06:43:31.822451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.740 [2024-11-20 06:43:31.822604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.740 [2024-11-20 06:43:31.822613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.740 [2024-11-20 06:43:31.822619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.740 [2024-11-20 06:43:31.822625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.740 [2024-11-20 06:43:31.834233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.740 [2024-11-20 06:43:31.834827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.740 [2024-11-20 06:43:31.834858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.740 [2024-11-20 06:43:31.834867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.740 [2024-11-20 06:43:31.835031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.740 [2024-11-20 06:43:31.835189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.740 [2024-11-20 06:43:31.835198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.740 [2024-11-20 06:43:31.835205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.740 [2024-11-20 06:43:31.835212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:11.740 [2024-11-20 06:43:31.846821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.740 [2024-11-20 06:43:31.847425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.740 [2024-11-20 06:43:31.847456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.740 [2024-11-20 06:43:31.847465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.740 [2024-11-20 06:43:31.847630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.740 [2024-11-20 06:43:31.847782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.740 [2024-11-20 06:43:31.847788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.740 [2024-11-20 06:43:31.847795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.740 [2024-11-20 06:43:31.847800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.740 [2024-11-20 06:43:31.859415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.740 [2024-11-20 06:43:31.860000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.740 [2024-11-20 06:43:31.860031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.740 [2024-11-20 06:43:31.860040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.740 [2024-11-20 06:43:31.860210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.740 [2024-11-20 06:43:31.860362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.741 [2024-11-20 06:43:31.860368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.741 [2024-11-20 06:43:31.860373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.741 [2024-11-20 06:43:31.860386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:11.741 [2024-11-20 06:43:31.871995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.741 [2024-11-20 06:43:31.872550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.741 [2024-11-20 06:43:31.872581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.741 [2024-11-20 06:43:31.872589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.741 [2024-11-20 06:43:31.872756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.741 [2024-11-20 06:43:31.872907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.741 [2024-11-20 06:43:31.872914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.741 [2024-11-20 06:43:31.872919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.741 [2024-11-20 06:43:31.872925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.741 [2024-11-20 06:43:31.884690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.741 [2024-11-20 06:43:31.885253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.741 [2024-11-20 06:43:31.885283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.741 [2024-11-20 06:43:31.885292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.741 [2024-11-20 06:43:31.885459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.741 [2024-11-20 06:43:31.885610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.741 [2024-11-20 06:43:31.885617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.741 [2024-11-20 06:43:31.885622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.741 [2024-11-20 06:43:31.885628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:11.741 [2024-11-20 06:43:31.897379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.741 [2024-11-20 06:43:31.897969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.741 [2024-11-20 06:43:31.898000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.741 [2024-11-20 06:43:31.898009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.741 [2024-11-20 06:43:31.898179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.741 [2024-11-20 06:43:31.898331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.741 [2024-11-20 06:43:31.898337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.741 [2024-11-20 06:43:31.898343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.741 [2024-11-20 06:43:31.898349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:11.741 [2024-11-20 06:43:31.910094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:11.741 [2024-11-20 06:43:31.910646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.741 [2024-11-20 06:43:31.910677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:11.741 [2024-11-20 06:43:31.910685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:11.741 [2024-11-20 06:43:31.910850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:11.741 [2024-11-20 06:43:31.911001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:11.741 [2024-11-20 06:43:31.911008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:11.741 [2024-11-20 06:43:31.911013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:11.741 [2024-11-20 06:43:31.911019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
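errno 111 on Linux is ECONNREFUSED: the bdev_nvme reconnect loop above keeps cycling through the same nine messages roughly every 12 ms because nothing is accepting connections on 10.0.0.2:4420 yet (the target's listener only comes up at 06:43:32.310 below). A minimal host-side check — a sketch only, assuming netcat is installed; this is not part of the test itself — would be:

    # Sketch (assumes nc/netcat): probe the port the retry loop is failing against
    nc -zv 10.0.0.2 4420 || echo "refused: no NVMe/TCP listener on 10.0.0.2:4420 yet"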
00:33:11.741 4381.67 IOPS, 17.12 MiB/s
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:12.006 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:12.006 [2024-11-20 06:43:32.241283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:12.006 Malloc0
00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:12.268 [2024-11-20 06:43:32.302429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:12.268 [2024-11-20 06:43:32.303070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.268 [2024-11-20 06:43:32.303101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954000 with addr=10.0.0.2, port=4420 00:33:12.268 [2024-11-20 06:43:32.303110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954000 is same with the state(6) to be set 00:33:12.268 [2024-11-20 06:43:32.303281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954000 (9): Bad file descriptor 00:33:12.268 [2024-11-20 06:43:32.303433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:12.268 [2024-11-20 06:43:32.303440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:12.268 [2024-11-20 06:43:32.303445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:12.268 [2024-11-20 06:43:32.303451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:12.268 [2024-11-20 06:43:32.310548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.268 [2024-11-20 06:43:32.315051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.268 06:43:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3020864 00:33:12.268 [2024-11-20 06:43:32.379661] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
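The rpc_cmd calls traced above (host/bdevperf.sh lines 18-21, after the '*** TCP Transport Init ***' notice) are the complete target-side setup. The same sequence can be issued by hand with scripts/rpc.py against a running nvmf_tgt; a sketch using exactly the arguments from the trace, with the rpc.py path assumed relative to the SPDK tree:

    # 64 MB malloc bdev with 512-byte blocks, to back the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Subsystem with allow-any-host (-a) and the serial from the trace.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # TCP listener on the target-namespace address used throughout this run.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420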
00:33:13.827 4726.43 IOPS, 18.46 MiB/s [2024-11-20T05:43:35.049Z] 5740.88 IOPS, 22.43 MiB/s [2024-11-20T05:43:35.992Z] 6559.56 IOPS, 25.62 MiB/s [2024-11-20T05:43:36.934Z] 7188.70 IOPS, 28.08 MiB/s [2024-11-20T05:43:38.321Z] 7710.73 IOPS, 30.12 MiB/s [2024-11-20T05:43:39.264Z] 8133.92 IOPS, 31.77 MiB/s [2024-11-20T05:43:40.207Z] 8500.77 IOPS, 33.21 MiB/s [2024-11-20T05:43:41.150Z] 8820.36 IOPS, 34.45 MiB/s 00:33:20.871 Latency(us) 00:33:20.871 [2024-11-20T05:43:41.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.871 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:20.871 Verification LBA range: start 0x0 length 0x4000 00:33:20.871 Nvme1n1 : 15.00 9085.65 35.49 13322.45 0.00 5693.69 556.37 14854.83 00:33:20.871 [2024-11-20T05:43:41.150Z] =================================================================================================================== 00:33:20.871 [2024-11-20T05:43:41.150Z] Total : 9085.65 35.49 13322.45 0.00 5693.69 556.37 14854.83 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.871 rmmod nvme_tcp 00:33:20.871 rmmod nvme_fabrics 00:33:20.871 rmmod nvme_keyring 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3021974 ']' 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3021974 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3021974 ']' 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3021974 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:20.871 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3021974 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3021974' 00:33:21.133 killing process with pid 3021974 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3021974 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3021974 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.133 06:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.680 00:33:23.680 real 0m28.446s 00:33:23.680 user 1m3.774s 00:33:23.680 sys 0m7.814s 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.680 ************************************ 00:33:23.680 END TEST nvmf_bdevperf 00:33:23.680 ************************************ 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.680 ************************************ 00:33:23.680 START TEST nvmf_target_disconnect 00:33:23.680 ************************************ 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:23.680 * Looking for test storage... 
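run_test only wraps the suite for timing and bookkeeping, so target_disconnect.sh can also be launched standalone; a sketch, assuming the SPDK checkout at the workspace path above and root privileges for the netns/iptables manipulation the script performs:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/host/target_disconnect.sh --transport=tcp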
00:33:23.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.680 --rc genhtml_branch_coverage=1 00:33:23.680 --rc genhtml_function_coverage=1 00:33:23.680 --rc genhtml_legend=1 00:33:23.680 --rc geninfo_all_blocks=1 00:33:23.680 --rc geninfo_unexecuted_blocks=1 00:33:23.680 00:33:23.680 ' 00:33:23.680 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.680 --rc genhtml_branch_coverage=1 00:33:23.680 --rc genhtml_function_coverage=1 00:33:23.680 --rc genhtml_legend=1 00:33:23.680 --rc geninfo_all_blocks=1 00:33:23.680 --rc geninfo_unexecuted_blocks=1 00:33:23.680 00:33:23.681 ' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:23.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.681 --rc genhtml_branch_coverage=1 00:33:23.681 --rc genhtml_function_coverage=1 00:33:23.681 --rc genhtml_legend=1 00:33:23.681 --rc geninfo_all_blocks=1 00:33:23.681 --rc geninfo_unexecuted_blocks=1 00:33:23.681 00:33:23.681 ' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:23.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.681 --rc genhtml_branch_coverage=1 00:33:23.681 --rc genhtml_function_coverage=1 00:33:23.681 --rc genhtml_legend=1 00:33:23.681 --rc geninfo_all_blocks=1 00:33:23.681 --rc geninfo_unexecuted_blocks=1 00:33:23.681 00:33:23.681 ' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:23.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.681 06:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:31.833 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:31.833 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:31.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:31.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.833 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
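nvmf_tcp_init, traced in the lines that follow, splits the two detected e810 ports across network namespaces: cvl_0_0 becomes the target interface at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that topology, using the same commands the trace shows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator side (root namespace) and target side (inside the namespace).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up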
00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.834 06:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:33:31.834 00:33:31.834 --- 10.0.0.2 ping statistics --- 00:33:31.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.834 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:31.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:33:31.834 00:33:31.834 --- 10.0.0.1 ping statistics --- 00:33:31.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.834 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:31.834 ************************************ 00:33:31.834 START TEST nvmf_target_disconnect_tc1 00:33:31.834 ************************************ 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:31.834 06:43:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:31.834 [2024-11-20 06:43:51.352966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.834 [2024-11-20 06:43:51.353078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6dad0 with addr=10.0.0.2, port=4420 00:33:31.834 [2024-11-20 06:43:51.353115] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:31.834 [2024-11-20 06:43:51.353127] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:31.834 [2024-11-20 06:43:51.353136] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:31.834 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:31.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:31.834 Initializing NVMe Controllers 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:31.834 00:33:31.834 real 0m0.146s 00:33:31.834 user 0m0.065s 00:33:31.834 sys 0m0.080s 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:31.834 ************************************ 00:33:31.834 END TEST nvmf_target_disconnect_tc1 00:33:31.834 ************************************ 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:31.834 ************************************ 00:33:31.834 START TEST nvmf_target_disconnect_tc2 00:33:31.834 ************************************ 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3028031 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3028031 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3028031 ']' 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:31.834 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.835 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:31.835 06:43:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.835 [2024-11-20 06:43:51.518188] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:33:31.835 [2024-11-20 06:43:51.518259] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.835 [2024-11-20 06:43:51.620180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:31.835 [2024-11-20 06:43:51.672612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.835 [2024-11-20 06:43:51.672662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
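The tracepoint notices above come from starting nvmf_tgt with -e 0xFFFF (all trace groups) and shared-memory id 0 (-i 0); the app names the capture command itself. A sketch of snapshotting the trace while the target runs, using only what the notices state (whether spdk_trace is on PATH or must be run from the build tree depends on the install):

    # Snapshot the nvmf app's trace ring (shm id 0 from the -i 0 flag).
    spdk_trace -s nvmf -i 0
    # Or keep the raw ring for offline analysis, as the next notice suggests:
    cp /dev/shm/nvmf_trace.0 /tmp/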
00:33:31.835 [2024-11-20 06:43:51.672671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.835 [2024-11-20 06:43:51.672678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.835 [2024-11-20 06:43:51.672684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.835 [2024-11-20 06:43:51.674878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:31.835 [2024-11-20 06:43:51.675039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:31.835 [2024-11-20 06:43:51.675254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:31.835 [2024-11-20 06:43:51.675255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:32.096 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:32.096 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:33:32.096 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:32.096 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:32.096 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:32.358 Malloc0 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:32.358 [2024-11-20 06:43:52.431418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:32.358 06:43:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:32.358 [2024-11-20 06:43:52.471887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3028379 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:32.358 06:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:34.276 06:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3028031 00:33:34.276 06:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:34.276 Read completed with error (sct=0, sc=8) 00:33:34.276 starting I/O failed 00:33:34.276 Read completed with error (sct=0, sc=8) 00:33:34.276 starting I/O failed 00:33:34.276 Read completed with error (sct=0, sc=8) 00:33:34.276 starting I/O failed 00:33:34.276 Read completed with error (sct=0, sc=8) 00:33:34.276 starting I/O failed 00:33:34.276 Read completed with error (sct=0, sc=8) 00:33:34.276 starting I/O failed 00:33:34.276 Read completed with error (sct=0, sc=8) 00:33:34.276 starting I/O failed 00:33:34.276 Read completed with error 
00:33:34.276 Read completed with error (sct=0, sc=8)
00:33:34.276 starting I/O failed
    [... 31 more Read/Write completions failed the same way; all 32 outstanding I/Os on this qpair ...]
00:33:34.277 [2024-11-20 06:43:54.509642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
    [... a second burst of 32 Read/Write completion failures, identical in form ...]
00:33:34.277 [2024-11-20 06:43:54.510001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
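If you need the exact failure tallies from the raw console output rather than this condensed view, a grep pass over the captured log is enough; the build.log file name here is an assumption:

    # Per-direction failed completions; with -q 32, expect 32 per torn-down qpair.
    grep -c 'Read completed with error (sct=0, sc=8)' build.log
    grep -c 'Write completed with error (sct=0, sc=8)' build.log
    # One "CQ transport error -6" line is printed per failed qpair.
    grep -c 'CQ transport error -6' build.log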
00:33:34.277 [2024-11-20 06:43:54.510524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.277 [2024-11-20 06:43:54.510578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420
00:33:34.277 qpair failed and we were unable to recover it.
    [... this three-line pattern repeats roughly 185 times between 06:43:54.510 and 06:43:54.575 as the host keeps retrying the connection; only the timestamps differ ...]
00:33:34.554 [2024-11-20 06:43:54.575938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.554 [2024-11-20 06:43:54.575964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420
00:33:34.554 qpair failed and we were unable to recover it.
00:33:34.554 [2024-11-20 06:43:54.576348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.576375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.576736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.576762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.577118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.577142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.577508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.577535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.577879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.577906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.578263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.578289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.578623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.578649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.578998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.579023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.579372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.579402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.579745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.579770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 
00:33:34.554 [2024-11-20 06:43:54.580126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.580153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.580511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.580536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.580916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.580942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.581300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.581327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.581504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.581528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.581891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.581916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.582285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.582314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.582525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.582551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.582875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.582899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.583261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.583287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 
00:33:34.554 [2024-11-20 06:43:54.583662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.583689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.584028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.584052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.584418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.584446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.584843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.584868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.585203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.585229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.585573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.585597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.585957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.585982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.586365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.586392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.586740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.586766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.587116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.587145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 
00:33:34.554 [2024-11-20 06:43:54.587517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.554 [2024-11-20 06:43:54.587550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.554 qpair failed and we were unable to recover it. 00:33:34.554 [2024-11-20 06:43:54.587913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.587944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.588295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.588327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.588680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.588712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.589056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.589085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.589535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.589573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.589972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.590003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.590340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.590371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.590735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.590765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.591116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.591149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 
00:33:34.555 [2024-11-20 06:43:54.591524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.591554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.591907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.591937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.592333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.592366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.592714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.592746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.593108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.593138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.593494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.593527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.593776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.593805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.594145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.594199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.594437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.594466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.594823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.594854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 
00:33:34.555 [2024-11-20 06:43:54.595210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.595243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.595645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.595676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.596024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.596056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.596413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.596444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.596840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.596872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.597266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.597297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.597691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.597722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.598072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.598104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.598501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.598533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.598907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.598940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 
00:33:34.555 [2024-11-20 06:43:54.599334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.599367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.599608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.599638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.599981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.600017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.600422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.600454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.600831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.600862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.601310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.601341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.601737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.601768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.602116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.602148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.602531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.602561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.555 qpair failed and we were unable to recover it. 00:33:34.555 [2024-11-20 06:43:54.602955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.555 [2024-11-20 06:43:54.602987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 
00:33:34.556 [2024-11-20 06:43:54.603351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.603384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 00:33:34.556 [2024-11-20 06:43:54.603738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.603770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 00:33:34.556 [2024-11-20 06:43:54.604048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.604083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 00:33:34.556 [2024-11-20 06:43:54.604471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.604502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 00:33:34.556 [2024-11-20 06:43:54.604852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.604884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 00:33:34.556 [2024-11-20 06:43:54.605280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.605312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 00:33:34.556 [2024-11-20 06:43:54.605663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.605696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 00:33:34.556 [2024-11-20 06:43:54.606094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.556 [2024-11-20 06:43:54.606124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:34.556 qpair failed and we were unable to recover it. 
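Errno 111 on Linux is ECONNREFUSED: every TCP connect() to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused, which typically means nothing is listening or accepting on that address yet, while the host-side qpair logic keeps retrying. The sketch below reproduces just this failing step in plain POSIX C; it is illustrative only and is not SPDK's posix_sock_create().

/* Minimal sketch of the failing step: a TCP connect() to
 * 10.0.0.2:4420 that is refused. On Linux, errno 111 is ECONNREFUSED. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);            /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}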
00:33:34.556 Read completed with error (sct=0, sc=8)
00:33:34.556 starting I/O failed
[... 30 more outstanding Read/Write completions fail the same way, each "completed with error (sct=0, sc=8)" followed by "starting I/O failed" (32 aborted I/Os in total) ...]
00:33:34.556 Read completed with error (sct=0, sc=8)
00:33:34.556 starting I/O failed
00:33:34.556 [2024-11-20 06:43:54.606527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:34.556 [2024-11-20 06:43:54.606962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.556 [2024-11-20 06:43:54.606995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:34.556 qpair failed and we were unable to recover it.
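The burst above is the other half of the failure: sct=0 is the NVMe "generic command status" type, and under it sc=8 (0x08) is consistent with "Command Aborted due to SQ Deletion", i.e. the outstanding reads and writes are aborted as their submission queue is torn down. spdk_nvme_qpair_process_completions then reports CQ transport error -6 (-ENXIO, "No such device or address"), and reconnect attempts continue on a fresh transport qpair (note the new tqpair address 0x7f9d48000b90). A completion callback can classify these statuses from the fields of struct spdk_nvme_cpl; the sketch below assumes only the public spdk/nvme.h layout and is not the test's actual callback.

/* Sketch of an I/O completion callback classifying the failures above,
 * using SPDK's public completion layout. */
#include "spdk/nvme.h"
#include <stdio.h>

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
                cpl->status.sct, cpl->status.sc);
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* Command aborted because its qpair is being deleted,
             * matching the sct=0, sc=8 entries in this log. */
            fprintf(stderr, "starting I/O failed\n");
        }
    }
}

A callback with this signature is the kind of function passed as cb_fn to spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(), which is where per-command status like sct/sc is surfaced to the application.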
00:33:34.556 [2024-11-20 06:43:54.607439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.556 [2024-11-20 06:43:54.607500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:34.556 qpair failed and we were unable to recover it.
[... the same three-line pattern repeats continuously for the new tqpair=0x7f9d48000b90 from 06:43:54.607 through 06:43:54.636; individual timestamps elided ...]
00:33:34.558 [2024-11-20 06:43:54.636475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.558 [2024-11-20 06:43:54.636500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:34.558 qpair failed and we were unable to recover it.
00:33:34.558 [2024-11-20 06:43:54.636903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.558 [2024-11-20 06:43:54.636927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.558 qpair failed and we were unable to recover it. 00:33:34.558 [2024-11-20 06:43:54.637283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.558 [2024-11-20 06:43:54.637309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.558 qpair failed and we were unable to recover it. 00:33:34.558 [2024-11-20 06:43:54.637664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.558 [2024-11-20 06:43:54.637687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.558 qpair failed and we were unable to recover it. 00:33:34.558 [2024-11-20 06:43:54.638048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.558 [2024-11-20 06:43:54.638071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.558 qpair failed and we were unable to recover it. 00:33:34.558 [2024-11-20 06:43:54.638435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.558 [2024-11-20 06:43:54.638460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.558 qpair failed and we were unable to recover it. 00:33:34.558 [2024-11-20 06:43:54.638797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.558 [2024-11-20 06:43:54.638821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.558 qpair failed and we were unable to recover it. 00:33:34.558 [2024-11-20 06:43:54.639181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.558 [2024-11-20 06:43:54.639208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.558 qpair failed and we were unable to recover it. 00:33:34.558 [2024-11-20 06:43:54.639572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.639595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.639929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.639953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.640318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.640342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 
00:33:34.559 [2024-11-20 06:43:54.640724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.640747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.641105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.641129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.641496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.641521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.641880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.641904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.642243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.642268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.642654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.642678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.643037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.643062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.643311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.643337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.643589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.643615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.643955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.643978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 
00:33:34.559 [2024-11-20 06:43:54.644333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.644358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.644704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.644727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.645140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.645170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.645552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.645575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.645939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.645962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.646309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.646340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.646689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.646719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.647089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.647120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.647509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.647541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.647912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.647943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 
00:33:34.559 [2024-11-20 06:43:54.648305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.648346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.648732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.648763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.649167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.649200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.649612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.649644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.650005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.650037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.650396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.650431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.650800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.650830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.651197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.651230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.651624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.651654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.652015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.652045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 
00:33:34.559 [2024-11-20 06:43:54.652416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.652448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.652803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.652833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.653196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.653229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.653651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.653684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.654045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.559 [2024-11-20 06:43:54.654078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.559 qpair failed and we were unable to recover it. 00:33:34.559 [2024-11-20 06:43:54.654427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.654462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.654726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.654758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.655151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.655191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.655544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.655575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.655935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.655965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 
00:33:34.560 [2024-11-20 06:43:54.656320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.656354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.656718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.656748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.657106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.657138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.657399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.657435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.657689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.657721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.658075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.658107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.658456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.658489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.658832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.658865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.659237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.659270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.659622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.659656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 
00:33:34.560 [2024-11-20 06:43:54.660053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.660084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.660449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.660481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.660841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.660873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.661235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.661266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.661631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.661662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.662018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.662050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.662423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.662453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.662807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.662837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.663197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.663231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.663633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.663663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 
00:33:34.560 [2024-11-20 06:43:54.664018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.664055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.664414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.664448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.664807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.664836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.665184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.665217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.665458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.665492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.665844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.665875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.666238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.666272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.666631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.666662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.667005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.667036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.667405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.667437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 
00:33:34.560 [2024-11-20 06:43:54.667800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.667830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.668190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.668223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.668577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.560 [2024-11-20 06:43:54.668609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.560 qpair failed and we were unable to recover it. 00:33:34.560 [2024-11-20 06:43:54.668971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.669001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.669368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.669401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.669757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.669789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.670151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.670191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.670547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.670579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.670939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.670971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.671332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.671363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 
00:33:34.561 [2024-11-20 06:43:54.671729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.671761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.672109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.672141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.672518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.672549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.672907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.672938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.673308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.673341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.673708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.673740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.674108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.674139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.674522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.674554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.674913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.674945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.675304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.675337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 
00:33:34.561 [2024-11-20 06:43:54.675692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.675723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.676087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.676119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.676520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.676552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.676909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.676940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.677304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.677337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.677688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.677718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.678146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.678185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.678465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.678494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.678728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.678762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.679098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.679129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 
00:33:34.561 [2024-11-20 06:43:54.679523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.679563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.679916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.679947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.680316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.680350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.680704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.680735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.681090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.681125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.681514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.681546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.561 qpair failed and we were unable to recover it. 00:33:34.561 [2024-11-20 06:43:54.681908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.561 [2024-11-20 06:43:54.681939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.682298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.682331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.682690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.682720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.683075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.683107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 
00:33:34.562 [2024-11-20 06:43:54.683538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.683569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.683920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.683952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.684322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.684355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.684710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.684742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.685098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.685130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.685499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.685531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.685893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.685925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.686361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.686392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.686741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.686772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 00:33:34.562 [2024-11-20 06:43:54.687135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.562 [2024-11-20 06:43:54.687178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.562 qpair failed and we were unable to recover it. 
00:33:34.562 [2024-11-20 06:43:54.687564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.562 [2024-11-20 06:43:54.687594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:34.562 qpair failed and we were unable to recover it.
00:33:34.562 [... the three-line error sequence above (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats near-verbatim roughly 200 more times between 06:43:54.687840 and 06:43:54.769914, differing only in timestamps ...]
00:33:34.568 [2024-11-20 06:43:54.770321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.568 [2024-11-20 06:43:54.770355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:34.568 qpair failed and we were unable to recover it.
00:33:34.568 [2024-11-20 06:43:54.770786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.770817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.771199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.771233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.771541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.771572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.772400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.772443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.772830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.772861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.773220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.773253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.774145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.774202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.774580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.774613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.774968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.775000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.775350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.775384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 
00:33:34.568 [2024-11-20 06:43:54.775747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.775789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.776148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.776187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.776541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.776572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.776838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.776869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.777243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.777275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.777633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.777665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.778029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.778061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.778415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.778448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.778815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.778846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.779200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.779234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 
00:33:34.568 [2024-11-20 06:43:54.779563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.779593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.779997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.780029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.780401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.780434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.780798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.780831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.781390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.781508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.781950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.781987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.782476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.782581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.783008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.783043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.783523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.783626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 00:33:34.568 [2024-11-20 06:43:54.783921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.568 [2024-11-20 06:43:54.783955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.568 qpair failed and we were unable to recover it. 
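errno = 111 on Linux is ECONNREFUSED: the connection attempt reaches 10.0.0.2, but nothing is accepting on port 4420 (the IANA-assigned NVMe/TCP port), so nvme_tcp_qpair_connect_sock fails every qpair the same way. The failure mode is easy to reproduce outside SPDK; the sketch below is a minimal stand-alone probe, not SPDK code, assuming only that the address and port mirror the log above.

    /*
     * Minimal sketch (not SPDK code): reproduce the errno = 111 that
     * posix_sock_create logs when no listener is accepting on the
     * target. Address and port mirror the log and are illustrative.
     */
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* IANA-assigned NVMe/TCP port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* A reachable host with no listener yields errno = 111. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
            if (errno == ECONNREFUSED) {
                printf("host reachable, but nothing listening on the port\n");
            }
        }
        close(fd);
        return 0;
    }

As long as no NVMe/TCP target is listening on 10.0.0.2:4420, this probe would be expected to print "connect() failed, errno = 111 (Connection refused)", matching the log lines above.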
[... the same three-line sequence continues for tqpair=0x7f9d3c000b90 against addr=10.0.0.2, port=4420 from 06:43:54.781950 through 06:43:54.837659, every attempt failing with connect() errno = 111 ...]
00:33:34.845 [2024-11-20 06:43:54.838008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.845 [2024-11-20 06:43:54.838039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:34.845 qpair failed and we were unable to recover it.
00:33:34.845 [2024-11-20 06:43:54.838405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.838437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.838799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.838832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.839081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.839117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.839544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.839578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.839934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.839967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.840326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.840359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.840721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.840752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.841109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.841141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.841485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.841516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.841890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.841921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 
00:33:34.845 [2024-11-20 06:43:54.842284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.842317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.842681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.842713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.843078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.843108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.843510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.843543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.843776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.843811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.844194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.844227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.844625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.844656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.844885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.844916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.845285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.845316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.845689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.845719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 
00:33:34.845 [2024-11-20 06:43:54.846082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.846114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.846480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.846514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.846878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.846909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.847279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.847311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.847670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.847702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.845 qpair failed and we were unable to recover it. 00:33:34.845 [2024-11-20 06:43:54.848068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.845 [2024-11-20 06:43:54.848098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.848468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.848503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.848878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.848915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.849272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.849306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.849652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.849682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 
00:33:34.846 [2024-11-20 06:43:54.850027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.850058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.850427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.850459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.850816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.850847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.851204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.851237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.851689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.852024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.852055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.852414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.852446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.852811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.852843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.853083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.853117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.853503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.853534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 
00:33:34.846 [2024-11-20 06:43:54.853893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.853924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.854283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.854318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.854677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.854707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.855060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.855091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.855452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.855484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.855721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.855755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.856111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.856142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.856512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.856544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.856895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.856927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.857313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.857345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 
00:33:34.846 [2024-11-20 06:43:54.857716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.857747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.858111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.858142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.858541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.858573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.858930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.858962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.859321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.859354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.859709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.859740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.860115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.860147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.860508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.860539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.860892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.860923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.861269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.861301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 
00:33:34.846 [2024-11-20 06:43:54.861695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.861725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.862084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.862115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.846 [2024-11-20 06:43:54.862476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.846 [2024-11-20 06:43:54.862510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.846 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.862847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.862877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.863246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.863279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.863650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.863682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.864024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.864055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.864408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.864447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.864797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.864830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.865178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.865211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 
00:33:34.847 [2024-11-20 06:43:54.865577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.865608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.865966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.865999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.866349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.866381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.866752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.866783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.867117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.867149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.867549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.867581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.867929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.867962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.868318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.868350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.868715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.868747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.869179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.869213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 
00:33:34.847 [2024-11-20 06:43:54.869575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.869607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.869968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.870000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.870343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.870373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.870737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.870767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.871131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.871170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.871504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.871536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.871889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.871922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.872284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.872316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.872678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.872709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.872957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.872992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 
00:33:34.847 [2024-11-20 06:43:54.873317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.873350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.873711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.873741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.874176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.874208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.874559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.874591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.874949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.874982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.875342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.875373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.875732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.875765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.876130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.876179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.876548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.876579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 00:33:34.847 [2024-11-20 06:43:54.876827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.876861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.847 qpair failed and we were unable to recover it. 
00:33:34.847 [2024-11-20 06:43:54.877214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.847 [2024-11-20 06:43:54.877246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.877656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.877687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.878034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.878065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.878411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.878442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.878793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.878825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.879192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.879224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.879686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.879717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.880064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.880103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.880475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.880508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.880864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.880895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 
00:33:34.848 [2024-11-20 06:43:54.881254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.881288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.881650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.881681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.882032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.882062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.882425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.882459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.882817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.882848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.883213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.883247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.883608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.883638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.883987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.884017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.884385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.884418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.884779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.884811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 
00:33:34.848 [2024-11-20 06:43:54.885179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.885211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.885572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.885604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.885963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.885995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.886338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.886370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.886732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.886764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.887133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.887171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.887532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.887565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.887920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.887950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.888310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.888343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 00:33:34.848 [2024-11-20 06:43:54.888576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.848 [2024-11-20 06:43:54.888624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.848 qpair failed and we were unable to recover it. 
00:33:34.848 [2024-11-20 06:43:54.888976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.848 [2024-11-20 06:43:54.889007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:34.848 qpair failed and we were unable to recover it.
00:33:34.848-00:33:34.855 [... the same three-line error sequence repeats continuously, with only the timestamps changing, from 06:43:54.889383 through 06:43:54.970455: every connect() attempt fails with errno = 111, every sock connection error is reported for tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420, and each time the qpair fails and cannot be recovered ...]
00:33:34.855 [2024-11-20 06:43:54.970816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.970849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.971204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.971235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.971609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.971641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.971995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.972025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.972336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.972371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.972747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.972778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.973133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.973174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.973532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.973570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.973913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.973945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.974294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.974326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 
00:33:34.855 [2024-11-20 06:43:54.974660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.974692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.975033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.975064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.975334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.975368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.975805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.975836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.976193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.976225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.976583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.976614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.976996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.977027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.977302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.977334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.977700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.977730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.978001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.978031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 
00:33:34.855 [2024-11-20 06:43:54.978402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.978434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.978790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.978822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.979195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.979230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.979612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.979643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.980039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.855 [2024-11-20 06:43:54.980071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.855 qpair failed and we were unable to recover it. 00:33:34.855 [2024-11-20 06:43:54.980407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.980439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.980798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.980829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.981191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.981223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.981598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.981630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.981993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.982023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 
00:33:34.856 [2024-11-20 06:43:54.982268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.982299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.982663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.982694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.983059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.983089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.983450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.983482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.983808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.983838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.984177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.984208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.984595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.984626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.984989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.985020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.985395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.985427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.985778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.985809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 
00:33:34.856 [2024-11-20 06:43:54.986180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.986214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.986577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.986608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.986965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.986996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.987312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.987345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.987696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.987728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.988091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.988124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.988450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.988483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.988739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.988775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.989169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.989202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.989597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.989628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 
00:33:34.856 [2024-11-20 06:43:54.989970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.990002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.990251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.990283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.990667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.990698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.991130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.991168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.991533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.991565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.991971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.992002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.992349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.992381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.992755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.992785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.856 qpair failed and we were unable to recover it. 00:33:34.856 [2024-11-20 06:43:54.993149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.856 [2024-11-20 06:43:54.993188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.993546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.993578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 
00:33:34.857 [2024-11-20 06:43:54.993934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.993967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.994242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.994275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.994652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.994683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.994975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.995006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.995354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.995386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.995748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.995781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.996141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.996181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.996539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.996570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.996831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.996862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.997218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.997249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 
00:33:34.857 [2024-11-20 06:43:54.997625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.997656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.998007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.998039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.998405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.998437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.998786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.998817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.999174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.999209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.999491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.999521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:54.999773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:54.999807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.000188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.000220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.000600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.000631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.000993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.001025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 
00:33:34.857 [2024-11-20 06:43:55.001392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.001424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.001773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.001805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.002062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.002092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.002467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.002498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.002844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.002875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.003237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.003271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.003641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.003670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.004032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.004072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.004318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.004353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.004738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.004771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 
00:33:34.857 [2024-11-20 06:43:55.005004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.005036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.005418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.005450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.005615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.005650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.857 [2024-11-20 06:43:55.005975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.857 [2024-11-20 06:43:55.006007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.857 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.006378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.006412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.006743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.006774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.007122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.007153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.007479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.007512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.007862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.007893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.008242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.008276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 
00:33:34.858 [2024-11-20 06:43:55.008657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.008687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.009050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.009082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.009338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.009369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.009727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.009758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.010114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.010145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.010547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.010580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.010937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.010968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.011408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.011441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.011792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.011826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.012182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.012214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 
00:33:34.858 [2024-11-20 06:43:55.012661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.012693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.013045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.013075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.013422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.013454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.013884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.013915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.014287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.014327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.014679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.014710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.015127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.015168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.015530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.858 [2024-11-20 06:43:55.015562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.858 qpair failed and we were unable to recover it. 00:33:34.858 [2024-11-20 06:43:55.015912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.015942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.016306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.016337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 
00:33:34.859 [2024-11-20 06:43:55.016701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.016733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.017155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.017194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.017572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.017603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.017965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.017997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.018342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.018375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.018733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.018765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.019036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.019068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.019404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.019438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.019714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.019745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 00:33:34.859 [2024-11-20 06:43:55.020113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.859 [2024-11-20 06:43:55.020144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.859 qpair failed and we were unable to recover it. 
00:33:34.859 [2024-11-20 06:43:55.020550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.859 [2024-11-20 06:43:55.020582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:34.859 qpair failed and we were unable to recover it.
[... the three-message sequence above repeats roughly 200 more times (target timestamps 06:43:55.020550 through 06:43:55.106873), differing only in timestamps; tqpair=0x7f9d3c000b90, addr=10.0.0.2, port=4420, and errno = 111 are identical in every iteration ...]
00:33:34.865 [2024-11-20 06:43:55.106843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:34.865 [2024-11-20 06:43:55.106873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:34.865 qpair failed and we were unable to recover it.
00:33:34.865 [2024-11-20 06:43:55.107305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.865 [2024-11-20 06:43:55.107338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.865 qpair failed and we were unable to recover it. 00:33:34.865 [2024-11-20 06:43:55.107728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.865 [2024-11-20 06:43:55.107760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.865 qpair failed and we were unable to recover it. 00:33:34.865 [2024-11-20 06:43:55.108125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.865 [2024-11-20 06:43:55.108156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:34.865 qpair failed and we were unable to recover it. 00:33:35.137 [2024-11-20 06:43:55.108517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.137 [2024-11-20 06:43:55.108552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.137 qpair failed and we were unable to recover it. 00:33:35.137 [2024-11-20 06:43:55.108880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.137 [2024-11-20 06:43:55.108917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.137 qpair failed and we were unable to recover it. 00:33:35.137 [2024-11-20 06:43:55.109269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.137 [2024-11-20 06:43:55.109301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.137 qpair failed and we were unable to recover it. 00:33:35.137 [2024-11-20 06:43:55.109664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.109696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.110042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.110074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.110429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.110463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.110813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.110845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 
00:33:35.138 [2024-11-20 06:43:55.111214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.111247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.111626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.111660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.112007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.112039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.112407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.112438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.112803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.112842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.113207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.113238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.113617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.113648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.114015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.114046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.114416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.114448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.114799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.114828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 
00:33:35.138 [2024-11-20 06:43:55.115192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.115225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.115619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.115651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.115999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.116034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.116385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.116421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.116774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.116804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.117156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.117200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.117584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.117615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.117971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.118004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.118254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.118286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.118636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.118668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 
00:33:35.138 [2024-11-20 06:43:55.119024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.119056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.120868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.120926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.121223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.121257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.121657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.121690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.122046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.122078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.122436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.122468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.122834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.122865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.123227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.123260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.123503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.123539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.123890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.123922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 
00:33:35.138 [2024-11-20 06:43:55.124270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.124303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.124766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.124798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.125147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.125189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.138 qpair failed and we were unable to recover it. 00:33:35.138 [2024-11-20 06:43:55.125549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.138 [2024-11-20 06:43:55.125581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.125929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.125963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.126324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.126358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.128118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.128187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.128628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.128662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.128905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.128940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.129289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.129325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 
00:33:35.139 [2024-11-20 06:43:55.129671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.129704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.130051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.130084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.130433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.130468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.130826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.130860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.131296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.131338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.131684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.131716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.132069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.132099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.132456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.132488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.132850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.132880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.133237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.133268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 
00:33:35.139 [2024-11-20 06:43:55.133622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.133652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.134003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.134034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.134403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.134436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.134775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.134807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.135155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.135199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.135546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.135577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.135936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.135967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.136322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.136353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.136750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.136781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.137181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.137213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 
00:33:35.139 [2024-11-20 06:43:55.137574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.137606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.137960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.137990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.138343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.138377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.138729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.138760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.139118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.139149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.139507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.139540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.139794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.139825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.140181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.140212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.140570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.140600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.140962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.140993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 
00:33:35.139 [2024-11-20 06:43:55.141349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.139 [2024-11-20 06:43:55.141381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-20 06:43:55.141745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.141777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.142136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.142191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.142551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.142582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.142934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.142964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.143312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.143344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.143698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.143728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.144084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.144115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.144455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.144486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.144881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.144911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 
00:33:35.140 [2024-11-20 06:43:55.145148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.145191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.145589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.145620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.145972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.146004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.146341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.146374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.146737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.146774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.147125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.147157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.147525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.147556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.147936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.147966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.148318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.148350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.148555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.148589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 
00:33:35.140 [2024-11-20 06:43:55.148933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.148964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.149315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.149346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.149708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.149738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.150097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.150129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.150518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.150549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.150897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.150929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.151282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.151313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.151751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.151782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.152133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.152176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.152557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.152588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 
00:33:35.140 [2024-11-20 06:43:55.152943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.152975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.153334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.153367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.153743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.153774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.154136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.154178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.154557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.154589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.154951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.154983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.155341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.155375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.155737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.140 [2024-11-20 06:43:55.155769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-20 06:43:55.156123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.156155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.156519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.156552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 
00:33:35.141 [2024-11-20 06:43:55.156913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.156943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.157303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.157337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.157704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.157736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.158103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.158133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.158501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.158533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.158901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.158932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.159288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.159319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.159676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.159708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.160054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.160085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 00:33:35.141 [2024-11-20 06:43:55.160429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.141 [2024-11-20 06:43:55.160460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.141 qpair failed and we were unable to recover it. 
00:33:35.141 [2024-11-20 06:43:55.160812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.141 [2024-11-20 06:43:55.160842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.141 qpair failed and we were unable to recover it.
00:33:35.141 [2024-11-20 06:43:55.161213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.141 [2024-11-20 06:43:55.161246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.141 qpair failed and we were unable to recover it.
00:33:35.141 [2024-11-20 06:43:55.164243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.141 [2024-11-20 06:43:55.164310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.141 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats, timestamps advancing through 2024-11-20 06:43:55.250662, always with identical details: connect() failed, errno = 111; sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:33:35.147 [2024-11-20 06:43:55.251023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.251055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.251413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.251445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.251811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.251841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.252291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.252323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.252671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.252702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.253056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.253088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.253476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.253508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.253882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.253912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.254304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.254338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.254702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.254734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 
00:33:35.147 [2024-11-20 06:43:55.255090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.255123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.255576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.255609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.256009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.256041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.256430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.256462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.256821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.256854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.257301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.257334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.257721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.257751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.258148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.258195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.258568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.258599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.258820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.258851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 
00:33:35.147 [2024-11-20 06:43:55.259085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.259115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.259376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.259417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.259843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.259875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.260283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.260315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.260674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.260705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.261062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.261092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.261459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.261492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.261748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.261779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.262122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.262153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.262561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.262592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 
00:33:35.147 [2024-11-20 06:43:55.262949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.262980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.263302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.147 [2024-11-20 06:43:55.263334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.147 qpair failed and we were unable to recover it. 00:33:35.147 [2024-11-20 06:43:55.263713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.263743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.264103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.264135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.264544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.264578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.264944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.264977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.265334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.265368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.265742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.265773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.266134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.266176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.266335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.266370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 
00:33:35.148 [2024-11-20 06:43:55.266754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.266787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.267134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.267197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.267569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.267600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.267971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.268002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.268361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.268395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.268746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.268778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.269147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.269203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.269566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.269596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.269969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.270000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.270392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.270427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 
00:33:35.148 [2024-11-20 06:43:55.270799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.270830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.271205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.271236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.271626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.271658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.272017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.272048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.272449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.272482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.272713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.272747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.273105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.273139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.273541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.273573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.273944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.273978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.274348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.274383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 
00:33:35.148 [2024-11-20 06:43:55.274753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.274785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.275151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.275198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.275561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.275593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.275961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.275993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.276353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.276385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.276807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.276839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.277201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.277234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.277490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.277521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.277952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.277983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 00:33:35.148 [2024-11-20 06:43:55.278386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.148 [2024-11-20 06:43:55.278418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.148 qpair failed and we were unable to recover it. 
00:33:35.148 [2024-11-20 06:43:55.278847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.278879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.279129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.279168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.279429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.279465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.279829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.279861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.280211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.280244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.280643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.280675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.281091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.281122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.281440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.281473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.281855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.281888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.282251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.282282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 
00:33:35.149 [2024-11-20 06:43:55.282756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.282787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.283144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.283185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.283382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.283412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.283786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.283816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.284178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.284212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.284630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.284661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.284999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.285032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.285441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.285473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.285646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.285677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.286082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.286113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 
00:33:35.149 [2024-11-20 06:43:55.286503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.286536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.286896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.286928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.287286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.287319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.287718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.287751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.288097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.288127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.288401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.288434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.288802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.288835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.289251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.289283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.289655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.289687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.289975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.290007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 
00:33:35.149 [2024-11-20 06:43:55.290308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.290340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.290615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.290651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.291043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.149 [2024-11-20 06:43:55.291075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.149 qpair failed and we were unable to recover it. 00:33:35.149 [2024-11-20 06:43:55.291418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.291450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.291809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.291843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.292089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.292121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.292493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.292525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.292908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.292940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.293237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.293269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.293640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.293672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 
00:33:35.150 [2024-11-20 06:43:55.294069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.294100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.294482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.294516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.294863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.294894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.295276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.295309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.295589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.295620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.295973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.296005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.296444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.296477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.296817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.296848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.297217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.297248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.297626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.297658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 
00:33:35.150 [2024-11-20 06:43:55.298028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.298058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.298423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.298457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.298806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.298836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.299207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.299239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.299632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.299662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.299917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.299948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.300217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.300253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.300624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.300657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.301007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.301041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 00:33:35.150 [2024-11-20 06:43:55.301470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.150 [2024-11-20 06:43:55.301502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.150 qpair failed and we were unable to recover it. 
00:33:35.150 [2024-11-20 06:43:55.301844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.150 [2024-11-20 06:43:55.301878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.150 qpair failed and we were unable to recover it.
00:33:35.150 [... the same three-record failure sequence — connect() to 10.0.0.2 port 4420 refused with errno = 111 (ECONNREFUSED), the sock connection error for tqpair=0x7f9d3c000b90, and "qpair failed and we were unable to recover it." — repeats for roughly 200 further reconnect attempts between 06:43:55.302 and 06:43:55.381 ...]
00:33:35.156 [2024-11-20 06:43:55.381952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.156 [2024-11-20 06:43:55.381984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.156 qpair failed and we were unable to recover it.
00:33:35.156 [2024-11-20 06:43:55.382413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.382447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.382824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.382863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.383324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.383357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.383714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.383746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.384104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.384135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.384519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.384552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.384913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.384945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.385309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.385342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.385583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.385614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.385969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.386001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 
00:33:35.156 [2024-11-20 06:43:55.386357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.386392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.386740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.386771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.387132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.387173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.387551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.387584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.387930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.387961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.388330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.388363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.388717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.388749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.389097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.389129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.389519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.389551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.389909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.389940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 
00:33:35.156 [2024-11-20 06:43:55.390273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.390306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.390536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.390571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.390946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.390976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.391345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.391377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.156 qpair failed and we were unable to recover it. 00:33:35.156 [2024-11-20 06:43:55.391748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.156 [2024-11-20 06:43:55.391779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.392137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.392174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.392567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.392598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.393046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.393077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.393415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.393450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.393802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.393834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 
00:33:35.157 [2024-11-20 06:43:55.394098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.394129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.394472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.394503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.394857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.394889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.395238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.395268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.395642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.395675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.396038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.396070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.396419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.396450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.396805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.396837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.397206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.397239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.397599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.397630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 
00:33:35.157 [2024-11-20 06:43:55.397842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.397872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.398220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.398257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.398648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.398679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.399036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.399068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.399441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.399473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.399831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.399862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.400225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.400258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.400595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.400625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.400998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.401028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.401364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.401397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 
00:33:35.157 [2024-11-20 06:43:55.401756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.401787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.157 [2024-11-20 06:43:55.402143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.157 [2024-11-20 06:43:55.402183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.157 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.402619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.402652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.403000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.403034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.403419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.403453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.403800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.403833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.404183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.404216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.404457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.404488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.404887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.404919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.405329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.405361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 
00:33:35.431 [2024-11-20 06:43:55.405721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.405754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.405995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.406029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.406379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.406412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.406790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.406821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.407065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.407096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.407446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.407478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.407818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.407848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.408219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.408253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.408641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.408672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.409023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.409054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 
00:33:35.431 [2024-11-20 06:43:55.409420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.409452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.409804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.409836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.410246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.410280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.410634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.410664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.410945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.410976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.411340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.411372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.411685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.411715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.411895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.411927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.412296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.412328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.412639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.412670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 
00:33:35.431 [2024-11-20 06:43:55.413019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.413049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.413236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.431 [2024-11-20 06:43:55.413274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.431 qpair failed and we were unable to recover it. 00:33:35.431 [2024-11-20 06:43:55.413640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.413671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.414010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.414041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.414398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.414431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.414770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.414801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.415210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.415242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.415457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.415488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.415788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.415818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.416193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.416226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 
00:33:35.432 [2024-11-20 06:43:55.416598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.416632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.416996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.417026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.417277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.417307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.417669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.417699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.418050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.418081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.418435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.418468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.418826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.418858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.419223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.419256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.419554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.419586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.419934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.419967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 
00:33:35.432 [2024-11-20 06:43:55.420264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.420296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.420563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.420593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.420948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.420979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.421373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.421406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.421648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.421679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.421950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.421982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.422228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.422261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.422635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.422669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.423046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.423078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.423448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.423482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 
00:33:35.432 [2024-11-20 06:43:55.423838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.423869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.424196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.424230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.424611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.424642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.425015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.425045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.425450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.425483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.425835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.425864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.426117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.426147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.426442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.426472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.426831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.426861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 00:33:35.432 [2024-11-20 06:43:55.427223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.432 [2024-11-20 06:43:55.427255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.432 qpair failed and we were unable to recover it. 
00:33:35.433 [2024-11-20 06:43:55.427612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.427642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.428018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.428049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.428423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.428455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.428803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.428835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.429205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.429238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.429605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.429638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.430004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.430034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.430352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.430387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.430809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.430839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 00:33:35.433 [2024-11-20 06:43:55.431202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.433 [2024-11-20 06:43:55.431237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.433 qpair failed and we were unable to recover it. 
00:33:35.433 [2024-11-20 06:43:55.431636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.433 [2024-11-20 06:43:55.431666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.433 qpair failed and we were unable to recover it.
00:33:35.433 [... the same three-line failure (connect() errno = 111 -> sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously, with message timestamps advancing from 06:43:55.431 through 06:43:55.511 ...]
00:33:35.438 [2024-11-20 06:43:55.512187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.438 [2024-11-20 06:43:55.512238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.438 qpair failed and we were unable to recover it. 00:33:35.438 [2024-11-20 06:43:55.512553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.512588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.512952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.512985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.513332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.513366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.513729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.513762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.514185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.514218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.514585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.514617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.514963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.514997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.515343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.515390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.515744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.515775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 
00:33:35.439 [2024-11-20 06:43:55.516168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.516202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.516602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.516634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.516955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.516987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.517398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.517434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.517782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.517814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.518141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.518184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.518503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.518535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.518883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.518916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.519155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.519200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.519480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.519513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 
00:33:35.439 [2024-11-20 06:43:55.519764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.519799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.520043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.520076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.520451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.520485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.520838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.520869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.521243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.521277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.521668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.521701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.522051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.522084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.522381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.522415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.522774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.522808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.523199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.523232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 
00:33:35.439 [2024-11-20 06:43:55.523589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.523621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.523980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.524014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.524482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.524515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.524867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.524899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.525194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.525227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.525625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.525659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.526051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.526082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.526377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.526410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.526767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.526796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.439 qpair failed and we were unable to recover it. 00:33:35.439 [2024-11-20 06:43:55.527169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.439 [2024-11-20 06:43:55.527202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 
00:33:35.440 [2024-11-20 06:43:55.527612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.527644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.528038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.528069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.528342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.528375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.528658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.528688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.529037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.529070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.529424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.529456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.529828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.529859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.530224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.530256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.530646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.530682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.531048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.531080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 
00:33:35.440 [2024-11-20 06:43:55.531429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.531459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.531823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.531856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.532257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.532290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.532498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.532531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.532934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.532965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.533254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.533288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.533674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.533705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.534072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.534104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.534464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.534496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.534843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.534877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 
00:33:35.440 [2024-11-20 06:43:55.535242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.535276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.535654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.535686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.536044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.536078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.536393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.536426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.536838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.536871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.537230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.537264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.537647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.537678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.538049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.538082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.538461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.538494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.538863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.538895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 
00:33:35.440 [2024-11-20 06:43:55.539336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.539369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.539727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.539758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.540109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.440 [2024-11-20 06:43:55.540145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.440 qpair failed and we were unable to recover it. 00:33:35.440 [2024-11-20 06:43:55.540523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.540555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.540920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.540951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.541322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.541355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.541743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.541775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.542182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.542217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.542657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.542688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.543031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.543064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 
00:33:35.441 [2024-11-20 06:43:55.543326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.543363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.543727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.543760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.544126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.544167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.544469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.544500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.544850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.544883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.545245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.545279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.545679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.545711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.546075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.546107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.546489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.546528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.546880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.546913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 
00:33:35.441 [2024-11-20 06:43:55.547290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.547323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.547680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.547712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.548084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.548114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.548479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.548511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.548866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.548897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.549257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.549290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.549662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.549693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.550055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.550087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.550347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.550383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.550726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.550759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 
00:33:35.441 [2024-11-20 06:43:55.551117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.551148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.551540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.551571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.551929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.551960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.552315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.552347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.552706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.552737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.553090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.553122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.553440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.553473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.553806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.553838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.554194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.554227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.554582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.554613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 
00:33:35.441 [2024-11-20 06:43:55.554961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.441 [2024-11-20 06:43:55.554994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.441 qpair failed and we were unable to recover it. 00:33:35.441 [2024-11-20 06:43:55.555369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.555402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.555749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.555782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.556132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.556170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.556534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.556566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.556921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.556951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.557318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.557352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.557702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.557733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.557980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.558012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.558373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.558405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 
00:33:35.442 [2024-11-20 06:43:55.558773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.558805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.559168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.559200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.559559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.559589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.559955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.559985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.560384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.560419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.560781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.560811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.561184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.561214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.561611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.561642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.562064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.562101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 00:33:35.442 [2024-11-20 06:43:55.562471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.442 [2024-11-20 06:43:55.562503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.442 qpair failed and we were unable to recover it. 
00:33:35.442 [2024-11-20 06:43:55.562873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.442 [2024-11-20 06:43:55.562904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.442 qpair failed and we were unable to recover it.
00:33:35.448 [... the identical three-line error group repeats continuously from 2024-11-20 06:43:55.562873 through 06:43:55.642119: every connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f9d3c000b90, and each qpair fails without recovery ...]
00:33:35.448 [2024-11-20 06:43:55.642464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.642499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.642845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.642875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.643227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.643260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.643614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.643645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.644005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.644036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.644289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.644321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.644681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.644712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.644955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.644987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.645344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.645375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.645746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.645777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 
00:33:35.448 [2024-11-20 06:43:55.646129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.646168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.646530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.646562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.646926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.646956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.647222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.647254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.647607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.647638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.647889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.647926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.648304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.648336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.648621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.648652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.648897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.648928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.649183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.649215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 
00:33:35.448 [2024-11-20 06:43:55.649606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.649638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.649989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.650021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.650378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.650411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.650773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.650803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.651049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.651079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.651360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.651392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.651635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.651664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.652046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.652076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.652432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.652464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 00:33:35.448 [2024-11-20 06:43:55.652829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.448 [2024-11-20 06:43:55.652861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.448 qpair failed and we were unable to recover it. 
00:33:35.448 [2024-11-20 06:43:55.653142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.653184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.653447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.653477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.653764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.653794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.654179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.654211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.654599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.654631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.654977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.655007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.655420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.655453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.655825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.655856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.656202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.656234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.656591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.656621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 
00:33:35.449 [2024-11-20 06:43:55.656985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.657016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.657403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.657436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.657826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.657858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.658220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.658252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.658615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.658646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.658892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.658922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.659262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.659295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.659745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.659777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.660142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.660181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.660613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.660644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 
00:33:35.449 [2024-11-20 06:43:55.660999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.661031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.661362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.661394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.661763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.661794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.662153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.662193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.662513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.662543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.662897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.662934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.663228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.663260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.663660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.663691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.664056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.664087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.664389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.664421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 
00:33:35.449 [2024-11-20 06:43:55.664803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.664834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.665184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.665218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.665638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.665670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.665910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.449 [2024-11-20 06:43:55.665944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.449 qpair failed and we were unable to recover it. 00:33:35.449 [2024-11-20 06:43:55.666325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.666357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.666766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.666797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.667154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.667197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.667465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.667495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.667866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.667897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.668296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.668330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 
00:33:35.450 [2024-11-20 06:43:55.668701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.668732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.669138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.669192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.669567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.669598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.669957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.669989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.670399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.670430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.670774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.670805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.671178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.671210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.671569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.671601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.671857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.671890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.672273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.672307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 
00:33:35.450 [2024-11-20 06:43:55.672607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.672638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.673050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.673080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.673406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.673444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.673674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.673708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.674071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.674103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.674476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.674509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.674747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.674777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.675169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.675201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.675578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.675609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.675895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.675925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 
00:33:35.450 [2024-11-20 06:43:55.676310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.676343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.676619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.676649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.676998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.677029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.677407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.677441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.677808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.677840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.678202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.678244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.678528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.678558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.678912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.678944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.679307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.679339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.679705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.679737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 
00:33:35.450 [2024-11-20 06:43:55.680104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.680134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.680562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.450 [2024-11-20 06:43:55.680594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.450 qpair failed and we were unable to recover it. 00:33:35.450 [2024-11-20 06:43:55.680823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.680856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.681219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.681252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.681606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.681638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.682003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.682036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.682294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.682328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.682685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.682717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.683124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.683155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.683416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.683447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 
00:33:35.451 [2024-11-20 06:43:55.683801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.683834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.684250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.684282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.684660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.684691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.685052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.685085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.685464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.685497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.685872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.685906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.686196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.686227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.686588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.686622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.686970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.687001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.687395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.687428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 
00:33:35.451 [2024-11-20 06:43:55.687779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.687811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.688177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.688210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.688591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.688624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.688967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.688999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.689395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.689430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.689774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.689805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.690176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.690209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.690580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.690612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.691016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.691046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 00:33:35.451 [2024-11-20 06:43:55.691285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.451 [2024-11-20 06:43:55.691316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.451 qpair failed and we were unable to recover it. 
00:33:35.451 [2024-11-20 06:43:55.691661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.451 [2024-11-20 06:43:55.691692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.451 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 06:43:55.692096 through 06:43:55.770817, differing only in the microsecond timestamps ...]
00:33:35.730 [2024-11-20 06:43:55.770817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.730 [2024-11-20 06:43:55.770848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.730 qpair failed and we were unable to recover it.
00:33:35.730 [2024-11-20 06:43:55.771057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.771088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.771496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.771529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.771881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.771911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.772156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.772203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.772553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.772583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.772954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.772987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.773403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.773436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.773814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.773846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.774200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.774232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.774604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.774636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 
00:33:35.730 [2024-11-20 06:43:55.775050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.775081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.775383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.775415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.775791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.775823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.776182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.776214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.776643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.776675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.777123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.777153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.777507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.777539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.777799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.777830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.778194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.778227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.778659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.778690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 
00:33:35.730 [2024-11-20 06:43:55.779044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.730 [2024-11-20 06:43:55.779075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.730 qpair failed and we were unable to recover it. 00:33:35.730 [2024-11-20 06:43:55.779429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.779460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.779815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.779846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.780117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.780148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.780622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.780654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.780904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.780934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.781230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.781263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.781663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.781694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.782053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.782082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.782537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.782570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 
00:33:35.731 [2024-11-20 06:43:55.782954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.782985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.783243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.783275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.783521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.783551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.783902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.783933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.784240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.784272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.784659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.784697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.784933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.784967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.785349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.785383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.785750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.785782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.786136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.786191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 
00:33:35.731 [2024-11-20 06:43:55.786621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.786653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.787021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.787053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.787438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.787471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.787898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.787929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.788215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.788245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.788629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.788660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.788893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.788923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.789291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.789322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.789691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.789723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.790065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.790097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 
00:33:35.731 [2024-11-20 06:43:55.790462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.790496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.790884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.790915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.791154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.791201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.791621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.791653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.791940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.791971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.792366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.792398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.792638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.792669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.793079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.793109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.793546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.731 [2024-11-20 06:43:55.793579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.731 qpair failed and we were unable to recover it. 00:33:35.731 [2024-11-20 06:43:55.793936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.793968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 
00:33:35.732 [2024-11-20 06:43:55.794393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.794424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.794780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.794811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.795197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.795230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.795505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.795535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.795883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.795913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.796155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.796197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.796621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.796652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.797021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.797052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.797400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.797431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.797796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.797826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 
00:33:35.732 [2024-11-20 06:43:55.798191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.798224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.798512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.798543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.798909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.798942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.799328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.799359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.799725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.799758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.800005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.800041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.800432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.800465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.800865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.800897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.801145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.801185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.801605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.801635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 
00:33:35.732 [2024-11-20 06:43:55.801998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.802030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.802467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.802499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.802833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.802862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.803216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.803248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.803608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.803640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.804018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.804048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.804452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.804483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.804844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.804878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.805222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.805254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.805639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.805670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 
00:33:35.732 [2024-11-20 06:43:55.806074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.806105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.806471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.806503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.806735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.806771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.807118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.807149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.807536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.807571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.807946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.807977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.732 [2024-11-20 06:43:55.808291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.732 [2024-11-20 06:43:55.808322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.732 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.808709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.808741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.809147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.809188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.809614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.809646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 
00:33:35.733 [2024-11-20 06:43:55.810002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.810036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.810476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.810508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.810874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.810906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.811253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.811286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.811673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.811704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.812067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.812098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.812219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.812253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.812595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.812625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.812993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.813026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.813431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.813463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 
00:33:35.733 [2024-11-20 06:43:55.813766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.813796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.814179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.814210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.814592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.814624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.815007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.815038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.815562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.815594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.815851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.815880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.816242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.816273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.816486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.816517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.816888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.816918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.817263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.817297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 
00:33:35.733 [2024-11-20 06:43:55.817674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.817705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.818070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.818101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.818454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.818485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.818838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.818870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.819115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.819145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.819554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.819585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.819942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.819974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.820377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.820409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.820777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.820809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 00:33:35.733 [2024-11-20 06:43:55.821172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.733 [2024-11-20 06:43:55.821205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.733 qpair failed and we were unable to recover it. 
00:33:35.733 [2024-11-20 06:43:55.821479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.733 [2024-11-20 06:43:55.821511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.733 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." line repeat, with only timestamps changing, roughly 200 more times between 06:43:55.821 and 06:43:55.900, all for tqpair=0x7f9d3c000b90, addr=10.0.0.2, port=4420 ...]
00:33:35.739 [2024-11-20 06:43:55.900641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:35.739 [2024-11-20 06:43:55.900672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:35.739 qpair failed and we were unable to recover it.
00:33:35.739 [2024-11-20 06:43:55.901052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.901083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.901442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.901474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.901831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.901868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.902237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.902271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.902627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.902657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.903019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.903049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.903416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.903449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.903800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.903832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.739 [2024-11-20 06:43:55.904191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.739 [2024-11-20 06:43:55.904222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.739 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.904588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.904619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 
00:33:35.740 [2024-11-20 06:43:55.904980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.905010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.905360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.905394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.905744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.905774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.906130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.906169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.906436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.906469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.906820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.906850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.907091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.907125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.907545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.907577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.907828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.907859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.908203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.908235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 
00:33:35.740 [2024-11-20 06:43:55.908506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.908536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.908887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.908917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.909371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.909405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.909600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.909633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.909978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.910008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.910392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.910425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.910784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.910816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.911179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.911210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.911584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.911615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.911976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.912008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 
00:33:35.740 [2024-11-20 06:43:55.912394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.912426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.912785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.912818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.913182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.913215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.913577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.913608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.914031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.914061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.914412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.914445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.914812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.914842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.915207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.915240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.915659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.915690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.916047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.916079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 
00:33:35.740 [2024-11-20 06:43:55.916442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.916476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.916848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.916879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.917245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.917284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.917635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.917669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.918019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.918049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.918416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.740 [2024-11-20 06:43:55.918449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.740 qpair failed and we were unable to recover it. 00:33:35.740 [2024-11-20 06:43:55.918801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.918833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.919091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.919122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.919517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.919549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.919909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.919942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 
00:33:35.741 [2024-11-20 06:43:55.920182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.920214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.920593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.920624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.920982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.921012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.921349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.921380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.921742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.921773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.922128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.922169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.922551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.922583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.922947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.922978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.923347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.923381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.923732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.923763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 
00:33:35.741 [2024-11-20 06:43:55.924116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.924149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.924511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.924542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.924898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.924929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.925298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.925329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.925692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.925723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.926080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.926111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.926465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.926497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.926856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.926889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.927125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.927154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.927576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.927608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 
00:33:35.741 [2024-11-20 06:43:55.927964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.927996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.928417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.928449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.928804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.928837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.929214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.929247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.929618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.929649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.930009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.930040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.930399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.930431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.930792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.930823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.931175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.931207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.931560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.931592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 
00:33:35.741 [2024-11-20 06:43:55.931945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.931977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.932206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.932240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.932596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.932633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.932867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.741 [2024-11-20 06:43:55.932901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.741 qpair failed and we were unable to recover it. 00:33:35.741 [2024-11-20 06:43:55.933236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.933268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.933635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.933667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.934019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.934051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.934419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.934451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.934812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.934843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.935195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.935229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 
00:33:35.742 [2024-11-20 06:43:55.935649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.935680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.936035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.936066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.936316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.936348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.936728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.936761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.937110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.937143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.937484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.937516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.937903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.937935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.938307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.938340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.938686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.938717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.939079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.939111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 
00:33:35.742 [2024-11-20 06:43:55.939501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.939533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.939889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.939920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.940349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.940381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.940731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.940763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.941120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.941150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.941524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.941555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.941920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.941950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.942308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.942339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.942738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.942769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.943116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.943147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 
00:33:35.742 [2024-11-20 06:43:55.943525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.943557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.943904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.943934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.944297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.944329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.944577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.944610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.944950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.944980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.945332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.945366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.945726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.945756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.946102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.946134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.946510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.946541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.946892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.946925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 
00:33:35.742 [2024-11-20 06:43:55.947284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.947315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.742 [2024-11-20 06:43:55.947683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.742 [2024-11-20 06:43:55.947716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.742 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.948075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.948114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.948576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.948608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.948962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.948993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.949340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.949375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.949717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.949748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.949988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.950018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.950397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.950429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.950793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.950825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 
00:33:35.743 [2024-11-20 06:43:55.951182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.951216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.951578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.951609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.951968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.951999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.952341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.952372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.952731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.952762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.953117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.953149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.953535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.953567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.953919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.953951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.954304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.954336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.954758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.954790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 
00:33:35.743 [2024-11-20 06:43:55.955136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.955183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.955536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.955568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.955916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.955947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.956187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.956218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.956624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.956655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.957007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.957039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.957407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.957440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.957791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.957822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.958181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.958216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.958570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.958601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 
00:33:35.743 [2024-11-20 06:43:55.958959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.958990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.959343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.959376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.959776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.959807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.960170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.960203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.743 [2024-11-20 06:43:55.960556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.743 [2024-11-20 06:43:55.960588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.743 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.960947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.960978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.961338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.961372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.961734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.961766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.962115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.962146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.962423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.962454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 
00:33:35.744 [2024-11-20 06:43:55.962806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.962838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.963200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.963233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.963613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.963650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.964001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.964033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.964279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.964315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.964683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.964713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.965067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.965099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.965456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.965488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.965840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.965869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.966233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.966265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 
00:33:35.744 [2024-11-20 06:43:55.966628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.966660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.967013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.967043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.967416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.967449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.967787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.967819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.968172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.968203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.968545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.968577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.968931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.968961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.969323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.969356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.969714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.969747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.970100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.970131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 
00:33:35.744 [2024-11-20 06:43:55.970504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.970535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.970889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.970921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.971279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.971311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.971666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.971697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.972056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.972090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.972447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.972479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.972833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.972864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.973220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.973253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.973648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.973678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.974061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.974093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 
00:33:35.744 [2024-11-20 06:43:55.974462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.974496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.974857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.744 [2024-11-20 06:43:55.974887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.744 qpair failed and we were unable to recover it. 00:33:35.744 [2024-11-20 06:43:55.975245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.975278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.975677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.975709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.976058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.976090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.976511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.976545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.976904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.976935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.977303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.977334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.977683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.977714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.978072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.978104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 
00:33:35.745 [2024-11-20 06:43:55.978481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.978514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.978862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.978895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.979250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.979289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.979647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.979679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.980038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.980069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.980423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.980455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.980803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.980836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.981183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.981215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.981570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.981600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.981956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.981987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 
00:33:35.745 [2024-11-20 06:43:55.982366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.982399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.982749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.982781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.983138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.983184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.983566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.983598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.983951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.983983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.984422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.984453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.984857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.984889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.985119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.985153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.985521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.985552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.985908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.985939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 
00:33:35.745 [2024-11-20 06:43:55.986295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.986331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.986677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.986707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.987057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.987089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.987442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.987475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.987832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.987864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.988225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.988257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:35.745 [2024-11-20 06:43:55.988657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.745 [2024-11-20 06:43:55.988688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:35.745 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.989039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.989072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.989426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.989459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.989814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.989847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 
00:33:36.019 [2024-11-20 06:43:55.990202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.990233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.990609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.990641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.991001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.991032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.991401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.991432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.991786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.991817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.992181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.992214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.992486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.992521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.992860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.992893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.993238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.993271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.993640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.993671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 
00:33:36.019 [2024-11-20 06:43:55.994040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.994074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.994435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.994466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.994827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.994865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.995219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.995251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.995633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.995663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.019 qpair failed and we were unable to recover it. 00:33:36.019 [2024-11-20 06:43:55.996033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.019 [2024-11-20 06:43:55.996065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.996458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.996490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.996842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.996875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.997225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.997257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.997630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.997662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 
00:33:36.020 [2024-11-20 06:43:55.998018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.998048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.998411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.998443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.998786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.998818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.999188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.999220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.999588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:55.999620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:55.999971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.000003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.000373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.000406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.000760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.000791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.001152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.001195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.001553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.001584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 
00:33:36.020 [2024-11-20 06:43:56.002007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.002039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.002397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.002430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.002779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.002810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.003176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.003209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.003559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.003589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.003948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.003980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.004337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.004369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.004739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.004769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.005128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.005168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.005526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.005558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 
00:33:36.020 [2024-11-20 06:43:56.005900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.005931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.006187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.006223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.006603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.006635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.006987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.007018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.007395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.007429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.007786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.007817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.008193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.008224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.008580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.008611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.008957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.008990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.009351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.009383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 
00:33:36.020 [2024-11-20 06:43:56.009741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.009772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.010133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.010175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.020 [2024-11-20 06:43:56.010541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.020 [2024-11-20 06:43:56.010572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.020 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.010917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.010949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.011302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.011336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.011699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.011732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.012082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.012113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.012473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.012506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.012858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.012889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.013241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.013273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 
00:33:36.021 [2024-11-20 06:43:56.013634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.013665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.014038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.014069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.014429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.014461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.014813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.014844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.015203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.015237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.015660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.015690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.016065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.016098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.016458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.016491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.016842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.016872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 00:33:36.021 [2024-11-20 06:43:56.017240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.021 [2024-11-20 06:43:56.017272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.021 qpair failed and we were unable to recover it. 
00:33:36.021 [2024-11-20 06:43:56.017644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.021 [2024-11-20 06:43:56.017675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.021 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() errno = 111, then sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it") repeats about 200 more times, with only timestamps changing, from 06:43:56.018 through 06:43:56.098 ...]
00:33:36.027 [2024-11-20 06:43:56.098620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.027 [2024-11-20 06:43:56.098657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.027 qpair failed and we were unable to recover it.
00:33:36.027 [2024-11-20 06:43:56.099002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.099034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.099412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.099444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.099827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.099858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.100216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.100249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.100497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.100527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.100907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.100937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.101316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.101349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.101700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.101733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.102086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.102117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.102462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.102493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 
00:33:36.027 [2024-11-20 06:43:56.102925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.102956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.103311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.103344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.103691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.103722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.104080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.104112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.104539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.104572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.104916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.104949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.105342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.105374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.105736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.105768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.106123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.106154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.106511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.106542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 
00:33:36.027 [2024-11-20 06:43:56.106743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.106777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.107165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.107198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.107594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.107625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.107985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.108016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.108385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.108419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.108781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.108813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.109181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.109215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.109542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.109574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.109936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.109967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.110326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.110360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 
00:33:36.027 [2024-11-20 06:43:56.110731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.110763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.111116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.111147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.027 [2024-11-20 06:43:56.111499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.027 [2024-11-20 06:43:56.111530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.027 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.111883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.111915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.112283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.112313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.112666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.112698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.113053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.113084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.113430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.113462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.113818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.113848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.114203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.114243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 
00:33:36.028 [2024-11-20 06:43:56.114590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.114621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.114982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.115012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.115389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.115423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.115766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.115797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.116152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.116191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.116540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.116572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.116929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.116959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.117321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.117353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.117715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.117747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.118187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.118219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 
00:33:36.028 [2024-11-20 06:43:56.118583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.118614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.118966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.118998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.119343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.119377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.119728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.119759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.120116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.120147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.120524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.120557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.120887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.120917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.121274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.121306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.121670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.121703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.122060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.122092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 
00:33:36.028 [2024-11-20 06:43:56.122453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.122486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.122834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.122866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.123227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.123258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.123628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.123659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.124074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.124105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.124359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.028 [2024-11-20 06:43:56.124391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.028 qpair failed and we were unable to recover it. 00:33:36.028 [2024-11-20 06:43:56.124771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.124802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.125165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.125199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.125546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.125577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.125791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.125823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 
00:33:36.029 [2024-11-20 06:43:56.126180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.126212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.126575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.126606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.126841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.126874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.127229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.127261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.127629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.127661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.128015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.128046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.128408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.128442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.128799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.128829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.129191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.129222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.129572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.129615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 
00:33:36.029 [2024-11-20 06:43:56.129903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.129935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.130299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.130332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.130701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.130732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.131086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.131118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.131480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.131511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.131856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.131887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.132128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.132181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.132560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.132591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.132948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.132978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.133344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.133376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 
00:33:36.029 [2024-11-20 06:43:56.133737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.133768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.134010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.134040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.134412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.134443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.134829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.134860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.135225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.135257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.135493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.135527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.135883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.135914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.136273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.136307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.136676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.136706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.137065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.137096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 
00:33:36.029 [2024-11-20 06:43:56.137465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.137499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.137844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.137875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.138239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.138271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.138626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.029 [2024-11-20 06:43:56.138659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.029 qpair failed and we were unable to recover it. 00:33:36.029 [2024-11-20 06:43:56.139084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.139114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.139467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.139498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.139849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.139880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.140255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.140287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.140638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.140670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.141030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.141061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 
00:33:36.030 [2024-11-20 06:43:56.141415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.141447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.141806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.141836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.142193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.142226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.142617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.142646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.143002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.143033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.143421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.143453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.143808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.143838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.144188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.144220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.144567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.144598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.144949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.144986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 
00:33:36.030 [2024-11-20 06:43:56.145383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.145416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.145779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.145811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.146179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.146211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.146568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.146598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.146959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.146991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.147339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.147370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.147735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.147765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.148125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.148156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.148548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.148578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 00:33:36.030 [2024-11-20 06:43:56.148924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.148955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it. 
00:33:36.030 [2024-11-20 06:43:56.149311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.030 [2024-11-20 06:43:56.149344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.030 qpair failed and we were unable to recover it.
00:33:36.036 [... the same three-line error (connect() failed, errno = 111 / sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats ~209 more times between 06:43:56.149703 and 06:43:56.229638; repetitions elided ...]
00:33:36.036 [2024-11-20 06:43:56.229986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.230017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.230387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.230419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.230816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.230848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.231203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.231235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.231489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.231520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.231907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.231937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.232293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.232332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.232690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.232722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.233077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.233108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.233515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.233547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 
00:33:36.036 [2024-11-20 06:43:56.233908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.233938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.234305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.234339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.234569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.234599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.234953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.234985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.235343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.235378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.235811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.235841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.236191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.236223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.236602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.236631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.236986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.237017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.237389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.237422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 
00:33:36.036 [2024-11-20 06:43:56.237809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.237841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.238183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.238215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.238562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.238594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.238948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.238978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.239336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.239370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.239723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.036 [2024-11-20 06:43:56.239753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.036 qpair failed and we were unable to recover it. 00:33:36.036 [2024-11-20 06:43:56.240113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.240144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.240523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.240555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.240913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.240945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.241374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.241405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 
00:33:36.037 [2024-11-20 06:43:56.241759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.241792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.242149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.242188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.242534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.242567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.242919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.242951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.243320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.243352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.243700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.243734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.243981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.244012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.244382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.244415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.244763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.244794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.245148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.245186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 
00:33:36.037 [2024-11-20 06:43:56.245555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.245585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.245951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.245983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.246334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.246365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.246628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.246658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.247001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.247031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.247402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.247435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.247786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.247822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.248182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.248214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.248575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.248606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.248961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.248991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 
00:33:36.037 [2024-11-20 06:43:56.249369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.249401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.249755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.249785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.250149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.250190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.250538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.250569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.250929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.250959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.251315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.251350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.251749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.251781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.252146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.252184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.252550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.252581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.037 qpair failed and we were unable to recover it. 00:33:36.037 [2024-11-20 06:43:56.252967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.037 [2024-11-20 06:43:56.252998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 
00:33:36.038 [2024-11-20 06:43:56.253362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.253395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.253611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.253640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.254027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.254059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.254421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.254452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.254695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.254729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.255073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.255104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.255515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.255548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.255902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.255933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.256313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.256344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.256704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.256736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 
00:33:36.038 [2024-11-20 06:43:56.256972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.257002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.257389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.257420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.257663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.257694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.258053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.258084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.258469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.258501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.258859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.258891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.259242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.259274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.259662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.259693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.260055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.260086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.260443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.260474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 
00:33:36.038 [2024-11-20 06:43:56.260831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.260861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.261227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.261259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.261621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.261653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.262004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.262035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.262293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.262325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.262672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.262703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.263082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.263118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.263510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.263543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.263917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.263949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.264324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.264357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 
00:33:36.038 [2024-11-20 06:43:56.264709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.264741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.265090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.265121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.265371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.265402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.038 qpair failed and we were unable to recover it. 00:33:36.038 [2024-11-20 06:43:56.265787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.038 [2024-11-20 06:43:56.265817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.266182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.266216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.266584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.266615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.266978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.267009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.267346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.267378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.267722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.267753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.268129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.268171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 
00:33:36.039 [2024-11-20 06:43:56.268576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.268608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.268962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.269000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.269358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.269390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.269744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.269775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.270189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.270224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.270588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.270618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.270968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.270999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.271338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.271373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.271722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.271753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.272113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.272147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 
00:33:36.039 [2024-11-20 06:43:56.272529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.272561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.272917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.272950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.273290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.273322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.273700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.273731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.274095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.274127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.274533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.274565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.274914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.274948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.275323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.275357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.275714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.275745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.276109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.276143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 
00:33:36.039 [2024-11-20 06:43:56.276420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.276452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.276822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.276853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.277220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.277253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.277632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.277663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.278030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.278061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.278454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.278485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.278915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.278953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.279300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.279331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.279693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.279724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 00:33:36.039 [2024-11-20 06:43:56.280078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.039 [2024-11-20 06:43:56.280109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.039 qpair failed and we were unable to recover it. 
00:33:36.039 [2024-11-20 06:43:56.280506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.039 [2024-11-20 06:43:56.280538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.039 qpair failed and we were unable to recover it.
[The same three-record error sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 06:43:56.280506 through 06:43:56.361908, ending with:]
00:33:36.318 [2024-11-20 06:43:56.361876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.318 [2024-11-20 06:43:56.361908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.318 qpair failed and we were unable to recover it.
00:33:36.318 [2024-11-20 06:43:56.362271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.318 [2024-11-20 06:43:56.362304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.318 qpair failed and we were unable to recover it. 00:33:36.318 [2024-11-20 06:43:56.362661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.318 [2024-11-20 06:43:56.362694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.318 qpair failed and we were unable to recover it. 00:33:36.318 [2024-11-20 06:43:56.363052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.318 [2024-11-20 06:43:56.363083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.318 qpair failed and we were unable to recover it. 00:33:36.318 [2024-11-20 06:43:56.363442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.318 [2024-11-20 06:43:56.363475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.318 qpair failed and we were unable to recover it. 00:33:36.318 [2024-11-20 06:43:56.363834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.318 [2024-11-20 06:43:56.363866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.318 qpair failed and we were unable to recover it. 00:33:36.318 [2024-11-20 06:43:56.364206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.318 [2024-11-20 06:43:56.364239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.318 qpair failed and we were unable to recover it. 00:33:36.318 [2024-11-20 06:43:56.364594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.318 [2024-11-20 06:43:56.364626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.318 qpair failed and we were unable to recover it. 00:33:36.318 [2024-11-20 06:43:56.364982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.365015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.365385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.365417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.365771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.365803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 
00:33:36.319 [2024-11-20 06:43:56.366167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.366199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.366559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.366590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.366973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.367005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.367379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.367413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.367774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.367806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.368177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.368210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.368564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.368596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.368949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.368981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.369340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.369373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.369734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.369766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 
00:33:36.319 [2024-11-20 06:43:56.370195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.370228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.370578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.370609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.370968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.371000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.371363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.371396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.371754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.371787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.372153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.372194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.372557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.372595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.372926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.372958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.373338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.373371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.373796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.373828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 
00:33:36.319 [2024-11-20 06:43:56.374184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.374217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.374571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.374603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.375002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.375034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.375454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.375488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.375847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.375878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.376233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.376265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.376631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.376663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.377010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.377041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.377414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.377447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.377812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.377844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 
00:33:36.319 [2024-11-20 06:43:56.378185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.378219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.378569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.378600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.378957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.378989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.379245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.379277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.319 [2024-11-20 06:43:56.379700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.319 [2024-11-20 06:43:56.379732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.319 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.380078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.380109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.380484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.380517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.380875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.380907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.381282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.381314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.381665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.381698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 
00:33:36.320 [2024-11-20 06:43:56.382061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.382092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.382448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.382480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.382837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.382868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.383226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.383259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.383653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.383684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.384029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.384061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.384403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.384435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.384768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.384799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.385154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.385199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.385545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.385576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 
00:33:36.320 [2024-11-20 06:43:56.385933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.385965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.386320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.386352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.386699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.386731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.387170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.387203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.387610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.387642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.388008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.388041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.388399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.388438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.388793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.388826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.389185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.389218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.389571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.389602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 
00:33:36.320 [2024-11-20 06:43:56.389954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.389985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.390344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.390377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.390775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.390806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.391154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.391194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.391547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.391578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.391924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.391956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.392317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.392351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.392712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.392743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.393108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.393140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.393448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.393480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 
00:33:36.320 [2024-11-20 06:43:56.393840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.393871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.394224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.394258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.320 [2024-11-20 06:43:56.394621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.320 [2024-11-20 06:43:56.394652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.320 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.395008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.395041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.395405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.395438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.395680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.395713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.396069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.396100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.396550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.396583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.396975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.397007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.397239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.397275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 
00:33:36.321 [2024-11-20 06:43:56.397625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.397659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.398013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.398045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.398294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.398330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.398686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.398719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.399058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.399090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.399454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.399487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.399846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.399879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.400242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.400276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.400639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.400671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.401023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.401055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 
00:33:36.321 [2024-11-20 06:43:56.401304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.401341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.401698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.401730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.402089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.402121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.402479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.402514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.402860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.402893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.403140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.403186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.403529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.403568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.403916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.403948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.404305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.404338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.404705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.404737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 
00:33:36.321 [2024-11-20 06:43:56.404969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.405004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.405404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.405437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.405799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.405831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.406221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.406253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.406652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.406684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.407051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.407083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.407443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.407475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.407827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.407860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.408218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.408251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.321 [2024-11-20 06:43:56.408612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.408644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 
00:33:36.321 [2024-11-20 06:43:56.408995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.321 [2024-11-20 06:43:56.409027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.321 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.409400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.409434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.409790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.409822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.410184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.410217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.410618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.410650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.410989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.411021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.411346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.411378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.411730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.411762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.412125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.412156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 00:33:36.322 [2024-11-20 06:43:56.412543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.412575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 
00:33:36.322 [2024-11-20 06:43:56.412932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.322 [2024-11-20 06:43:56.412965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.322 qpair failed and we were unable to recover it. 
00:33:36.322 [... the identical three-message failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously for the same tqpair from 06:43:56.413326 through 06:43:56.494447; only the timestamps differ ...] 
00:33:36.328 [2024-11-20 06:43:56.494896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.494927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 
00:33:36.328 [2024-11-20 06:43:56.495180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.495212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.495603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.495634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.496024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.496055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.496415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.496447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.496806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.496840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.497188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.497220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.497580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.497612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.498036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.498068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.498467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.498500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 00:33:36.328 [2024-11-20 06:43:56.498855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.328 [2024-11-20 06:43:56.498887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.328 qpair failed and we were unable to recover it. 
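For context: errno 111 on Linux is ECONNREFUSED -- nothing is accepting TCP connections on 10.0.0.2:4420, so every host-side connect() fails immediately and the NVMe/TCP qpair can never be established (the target application has just been killed, as the script output below shows). A minimal illustrative sketch of what such a retry loop amounts to; this is not SPDK's actual posix_sock_create()/nvme_tcp_qpair_connect_sock() code:

    /* Illustrative only: retry a TCP connect() that fails with
     * ECONNREFUSED (errno 111) while no listener is up on the target. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);              /* NVMe-oF TCP port */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        for (int attempt = 1; attempt <= 5; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                printf("connected on attempt %d\n", attempt);
                close(fd);
                return 0;
            }
            /* With no listener, errno is ECONNREFUSED == 111 on Linux. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
            close(fd);
            sleep(1);
        }
        return 1;
    }

Each iteration fails the same way until a listener comes back, which is exactly the pattern in the log above.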
00:33:36.328 [2024-11-20 06:43:56.499244 .. 06:43:56.509027] [the failure triplet repeats another 27 times here, interleaved with the test-script output that follows]
00:33:36.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3028031 Killed "${NVMF_APP[@]}" "$@"
00:33:36.328 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:36.328 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:36.328 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:36.328 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:36.328 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3029060
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3029060
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3029060 ']'
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:36.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:36.329 06:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:36.329 [2024-11-20 06:43:56.509402 .. 06:43:56.514760] [the failure triplet repeats another 15 times here, interleaved with the script output above]
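waitforlisten (with rpc_addr=/var/tmp/spdk.sock and max_retries=100 visible in the trace above) blocks until the freshly launched nvmf_tgt, pid 3029060, is up and accepting connections on its RPC Unix domain socket. A hypothetical C sketch of that polling idea -- the real helper is a shell function in autotest_common.sh and differs in detail:

    /* Illustrative only: poll a Unix domain socket until a listener accepts,
     * roughly what a waitforlisten-style helper does for /var/tmp/spdk.sock. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un sa = {0};
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);

        for (int attempt = 0; attempt < 100; attempt++) {  /* max_retries=100 */
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);                       /* target is up and listening */
                return 0;
            }
            close(fd);
            usleep(100 * 1000);                  /* wait 100 ms between tries */
        }
        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 1;
    }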
00:33:36.329 [2024-11-20 06:43:56.515123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.329 [2024-11-20 06:43:56.515172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.329 qpair failed and we were unable to recover it.
00:33:36.331 [the same triplet repeats 109 more times, 06:43:56.515567 through 06:43:56.557227, differing only in timestamps]
00:33:36.332 [2024-11-20 06:43:56.557600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.332 [2024-11-20 06:43:56.557632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.332 qpair failed and we were unable to recover it. 00:33:36.332 [2024-11-20 06:43:56.557910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.332 [2024-11-20 06:43:56.557940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.332 qpair failed and we were unable to recover it. 00:33:36.332 [2024-11-20 06:43:56.558306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.332 [2024-11-20 06:43:56.558340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.332 qpair failed and we were unable to recover it. 00:33:36.332 [2024-11-20 06:43:56.558766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.332 [2024-11-20 06:43:56.558799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.332 qpair failed and we were unable to recover it. 00:33:36.332 [2024-11-20 06:43:56.559178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.332 [2024-11-20 06:43:56.559211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.332 qpair failed and we were unable to recover it. 00:33:36.332 [2024-11-20 06:43:56.559568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.332 [2024-11-20 06:43:56.559598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.332 qpair failed and we were unable to recover it. 00:33:36.332 [2024-11-20 06:43:56.559974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.332 [2024-11-20 06:43:56.560005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.332 qpair failed and we were unable to recover it. 00:33:36.332 [2024-11-20 06:43:56.560385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.560417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.560779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.560810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.561181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.561214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 
00:33:36.333 [2024-11-20 06:43:56.561579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.561611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.561968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.562000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.562264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.562298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.562679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.562710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.563082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.563113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.563481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.563516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.563878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.563910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.564290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.564326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.564671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.564702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 00:33:36.333 [2024-11-20 06:43:56.565075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.333 [2024-11-20 06:43:56.565106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.333 qpair failed and we were unable to recover it. 
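Note: errno 111 is ECONNREFUSED on Linux, and 4420 is the standard NVMe/TCP port, so this burst simply means the initiator is dialing 10.0.0.2:4420 before any target is listening there. A standalone sketch (not SPDK code; the address and port are copied from the log purely for illustration) reproduces the same failure:

/* Minimal sketch, not SPDK code: connect() to an address/port with no
 * listener fails with errno 111 (ECONNREFUSED on Linux), matching the
 * posix_sock_create errors above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no NVMe-oF target listening at 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}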
00:33:36.333 [... 8 more connect() failed (errno = 111) / qpair failed sequences between 06:43:56.565478 and 06:43:56.568217, interleaved with the target process starting up: ...]
00:33:36.333 [2024-11-20 06:43:56.568220] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization...
00:33:36.333 [2024-11-20 06:43:56.568282] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:36.333 [... 1 more connect() failed (errno = 111) / qpair failed sequence at 06:43:56.568641 ...]
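Note: the bracketed list above is the argument vector the nvmf application hands to DPDK's Environment Abstraction Layer at startup: -c 0xF0 is the core mask (cores 4-7), --file-prefix=spdk0 namespaces the hugepage files so multiple SPDK apps can coexist, and --proc-type=auto selects primary/secondary process mode. As a rough illustration only (assumed wiring, not SPDK's actual init path; the flag values are copied from the log), a DPDK application would feed those flags to rte_eal_init() like this:

/* Illustrative only: passing the logged EAL flags to rte_eal_init().
 * SPDK builds this argv internally from its own options. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* program-name slot */
        "-c", "0xF0",                     /* core mask: cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=lib.power:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",            /* per-app hugepage namespace */
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "DPDK EAL initialization failed\n");
        return 1;
    }
    /* ... application setup would continue here ... */
    return 0;
}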
00:33:36.333 [2024-11-20 06:43:56.569044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.333 [2024-11-20 06:43:56.569073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.333 qpair failed and we were unable to recover it.
00:33:36.612 [... the same sequence repeats 139 more times between 06:43:56.569442 and 06:43:56.622837, still against tqpair=0x7f9d3c000b90 at 10.0.0.2 port 4420; the wall-clock prefix advances from 00:33:36.333 to 00:33:36.612 over the run ...]
00:33:36.612 [2024-11-20 06:43:56.623194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.623228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.623484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.623515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.623914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.623945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.624300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.624333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.624703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.624735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.625093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.625124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.625502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.625535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.625890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.625924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.626269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.626301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.626664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.626695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 
00:33:36.612 [2024-11-20 06:43:56.627012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.627044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.627403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.627437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.627792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.627824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.628193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.628225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.628584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.628618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.628972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.612 [2024-11-20 06:43:56.629003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.612 qpair failed and we were unable to recover it. 00:33:36.612 [2024-11-20 06:43:56.629380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.629413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.629774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.629806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.630137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.630180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.630548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.630579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 
00:33:36.613 [2024-11-20 06:43:56.630934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.630965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.631336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.631370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.631725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.631756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.631998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.632032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.632413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.632447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.632784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.632815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.633246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.633280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.633635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.633673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.634021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.634053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.634430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.634463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 
00:33:36.613 [2024-11-20 06:43:56.634827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.634859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.635206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.635240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.635609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.635639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.635997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.636028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.636394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.636429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.636844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.636875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.637239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.637271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.637626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.637660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.638001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.638033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.638415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.638447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 
00:33:36.613 [2024-11-20 06:43:56.638806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.638839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.639195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.639229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.639618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.639650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.640085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.640118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.640514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.640549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.640906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.640937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.641286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.641319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.641715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.641747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.613 [2024-11-20 06:43:56.642102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.613 [2024-11-20 06:43:56.642133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.613 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.642539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.642572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 
00:33:36.614 [2024-11-20 06:43:56.642929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.642961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.643317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.643350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.643591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.643626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.643977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.644011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.644379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.644412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.644775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.644807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.645174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.645209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.645572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.645604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.645959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.645990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.646368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.646402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 
00:33:36.614 [2024-11-20 06:43:56.646761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.646794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.647143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.647189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.647599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.647630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.647982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.648015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.648380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.648414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.648799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.648829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.649086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.649120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.649393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.649433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.649672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.649703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.650083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.650114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 
00:33:36.614 [2024-11-20 06:43:56.650503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.650536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.650990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.651022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.614 [2024-11-20 06:43:56.651388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.614 [2024-11-20 06:43:56.651422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.614 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.651779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.651809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.652181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.652215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.652582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.652613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.652985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.653017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.653385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.653418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.653763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.653793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.654172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.654205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 
00:33:36.615 [2024-11-20 06:43:56.654523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.654555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.654927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.654961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.655317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.655351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.655705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.655737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.656103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.656134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.656546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.656579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.656934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.656965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.657329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.657364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.657720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.657752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.658122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.658153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 
00:33:36.615 [2024-11-20 06:43:56.658533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.658566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.658928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.658959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.659366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.659399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.659741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.659772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.660125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.660175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.660502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.660537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.660892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.660924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.661277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.661312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.661544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.661574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.661944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.661975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 
00:33:36.615 [2024-11-20 06:43:56.662337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.615 [2024-11-20 06:43:56.662369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.615 qpair failed and we were unable to recover it. 00:33:36.615 [2024-11-20 06:43:56.662720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.662752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.663112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.663144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.663512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.663545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.663910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.663942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.664302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.664335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.664704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.664736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.665096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.665139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.665528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.665561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.665916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.665948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 
00:33:36.616 [2024-11-20 06:43:56.666306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.666340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.666692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.666723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.667077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.667109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.667477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.667511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.667867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.667899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.668263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.668298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.668654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.668687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.669040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.669072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.669424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.669459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.669830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.669862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 
00:33:36.616 [2024-11-20 06:43:56.670226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.670259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.670633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.670665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.670890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:36.616 [2024-11-20 06:43:56.671022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.671051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.671420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.671451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.671894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.671927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.672157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.672203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.672556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.672588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.672942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.672975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 00:33:36.616 [2024-11-20 06:43:56.673342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.616 [2024-11-20 06:43:56.673376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.616 qpair failed and we were unable to recover it. 
00:33:36.616 [2024-11-20 06:43:56.673743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.616 [2024-11-20 06:43:56.673774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.616 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (connect() failed with errno = 111, the resulting nvme_tcp_qpair_connect_sock error for tqpair=0x7f9d3c000b90 at 10.0.0.2 port 4420, and "qpair failed and we were unable to recover it.") repeats for every further connection attempt between 06:43:56.674 and 06:43:56.719; only the timestamps differ ...]
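errno = 111 in the records above is ECONNREFUSED on Linux: nothing was accepting TCP connections at 10.0.0.2 port 4420 while the initiator kept retrying. A quick way to confirm the mapping on a test node (assuming a Linux host with the standard UAPI headers installed):

    # errno 111 -> symbolic name, straight from the kernel UAPI header
    grep -w 111 /usr/include/asm-generic/errno.h
    # expected: #define ECONNREFUSED 111 /* Connection refused */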
[... the connect() failed (errno = 111) / qpair-failure sequence continues for the attempts between 06:43:56.719 and 06:43:56.722 ...]
00:33:36.621 [2024-11-20 06:43:56.722785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:36.621 [2024-11-20 06:43:56.722832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:36.621 [2024-11-20 06:43:56.722842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:36.621 [2024-11-20 06:43:56.722850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:36.621 [2024-11-20 06:43:56.722857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:36.621 [2024-11-20 06:43:56.722828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.621 [2024-11-20 06:43:56.722858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.621 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for the attempts between 06:43:56.723 and 06:43:56.725 ...]
00:33:36.621 [2024-11-20 06:43:56.724890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:33:36.621 [2024-11-20 06:43:56.725061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:33:36.621 [2024-11-20 06:43:56.725245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:33:36.621 [2024-11-20 06:43:56.725245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
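The app_setup_trace notices above name both ways to get at the tracepoint data the target registered. A minimal sketch of the two options, using the instance name and shm path exactly as printed in the notices (they are specific to this run):

    # take a runtime snapshot of the trace events for shm instance "nvmf", id 0
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0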
00:33:36.621 [2024-11-20 06:43:56.725741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.621 [2024-11-20 06:43:56.725772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.621 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for the remaining attempts between 06:43:56.726 and 06:43:56.751; only the timestamps differ ...]
00:33:36.623 [2024-11-20 06:43:56.751366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.751399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.751746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.751780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.752178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.752211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.752563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.752595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.752950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.752982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.753338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.753373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.753599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.753633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.753981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.754014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.754229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.754262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.754651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.754683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 
00:33:36.623 [2024-11-20 06:43:56.755029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.755061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.755400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.755433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.623 qpair failed and we were unable to recover it. 00:33:36.623 [2024-11-20 06:43:56.755662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.623 [2024-11-20 06:43:56.755697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.756052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.756086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.756446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.756478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.756835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.756867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.757179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.757213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.757571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.757602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.757960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.757993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.758338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.758371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 
00:33:36.624 [2024-11-20 06:43:56.758721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.758754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.759109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.759139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.759279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.759327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.759705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.759736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.760088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.760121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.760495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.760528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.760885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.760919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.761280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.761314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.761672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.761705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.762072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.762103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 
00:33:36.624 [2024-11-20 06:43:56.762507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.762542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.762889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.762922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.763279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.763312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.763677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.763711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.764054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.764087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.764467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.764499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.764855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.764889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.765229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.765262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.765624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.765655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.765899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.765931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 
00:33:36.624 [2024-11-20 06:43:56.766144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.766189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.766544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.766575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.766960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.766993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.767348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.767381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.767746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.624 [2024-11-20 06:43:56.767780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.624 qpair failed and we were unable to recover it. 00:33:36.624 [2024-11-20 06:43:56.768180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.768214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.768575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.768610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.768949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.768980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.769307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.769343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.769692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.769724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 
00:33:36.625 [2024-11-20 06:43:56.770090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.770125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.770510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.770543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.770899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.770935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.771284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.771316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.771689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.771721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.772073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.772104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.772525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.772558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.772925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.772958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.773376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.773409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.773785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.773818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 
00:33:36.625 [2024-11-20 06:43:56.774182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.774216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.774453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.774484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.774841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.774881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.775235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.775268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.775634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.775665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.776013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.776044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.776409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.776442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.776698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.776729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.777064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.777094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.777535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.777568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 
00:33:36.625 [2024-11-20 06:43:56.777921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.777952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.625 qpair failed and we were unable to recover it. 00:33:36.625 [2024-11-20 06:43:56.778192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.625 [2024-11-20 06:43:56.778227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.778587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.778619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.778969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.779000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.779108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.779138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.779498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.779531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.779908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.779941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.780219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.780251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.780545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.780576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.780921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.780953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 
00:33:36.626 [2024-11-20 06:43:56.781308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.781343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.781705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.781736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.781858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.781891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.782123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.782154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.782547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.782581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.782952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.782985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.783329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.783363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.783733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.783766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.784126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.784170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.784394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.784427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 
00:33:36.626 [2024-11-20 06:43:56.784778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.784812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.785177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.785211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.785576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.785610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.785963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.785996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.786375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.786409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.786768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.786801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.787151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.787207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.787570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.787606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.787856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.787887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.788276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.788312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 
00:33:36.626 [2024-11-20 06:43:56.788662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.788693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.788909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.788944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.626 qpair failed and we were unable to recover it. 00:33:36.626 [2024-11-20 06:43:56.789323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.626 [2024-11-20 06:43:56.789364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.789724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.789757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.790108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.790141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.790394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.790425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.790783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.790817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.791048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.791080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.791346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.791383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.791513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.791544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 
00:33:36.627 [2024-11-20 06:43:56.791918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.791950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.792303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.792336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.792599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.792635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.792944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.792977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.793324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.793358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.793726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.793758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.794008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.794039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.794367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.794400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.794658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.794688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.795076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.795108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 
00:33:36.627 [2024-11-20 06:43:56.795381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.795416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.795779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.795813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.796182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.796215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.796593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.796627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.796864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.796893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.797262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.797295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.797624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.797656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.798002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.798035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.798384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.798422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.798770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.798803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 
00:33:36.627 [2024-11-20 06:43:56.799153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.799209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.627 [2024-11-20 06:43:56.799581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.627 [2024-11-20 06:43:56.799613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.627 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.799955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.799988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.800376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.800407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.800751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.800785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.801131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.801177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.801538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.801570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.801922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.801953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.802319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.802352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.802710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.802740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 
00:33:36.628 [2024-11-20 06:43:56.802948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.802978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.803329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.803361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.803704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.803742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.804088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.804119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.804480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.804515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.804722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.804751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.805110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.805141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.805389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.805427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.805674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.805705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 00:33:36.628 [2024-11-20 06:43:56.806073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.628 [2024-11-20 06:43:56.806104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420 00:33:36.628 qpair failed and we were unable to recover it. 
00:33:36.628 [2024-11-20 06:43:56.806363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.806396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.806754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.806784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.807139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.807183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.807575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.807607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.807952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.807982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.808230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.808262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.808641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.808674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.809025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.809056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.809443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.809476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.809826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.809859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.810205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.810238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.810618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.628 [2024-11-20 06:43:56.810649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.628 qpair failed and we were unable to recover it.
00:33:36.628 [2024-11-20 06:43:56.810903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.810933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.811276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.811308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.811637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.811671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.812015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.812046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.812420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.812455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.812821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.812853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.813199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.813233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.813609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.813641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.814000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.814032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.814410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.814441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.814786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.814817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.815182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.815218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.815433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.815463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.815804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.815837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.816206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.816240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.816476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.816508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.816762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.816793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.817146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.817190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.817516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.817546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.817807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.817837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.818213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.818251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.818624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.818655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.819011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.819043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.819381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.819412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.819760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.819790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.820002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.820031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.820406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.820438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.820786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.820817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.821196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.821231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.821569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.821600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.821806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.821836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.822197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.822229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.822588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.822619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.822961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.822991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.823369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.823402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.823715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.629 [2024-11-20 06:43:56.823745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.629 qpair failed and we were unable to recover it.
00:33:36.629 [2024-11-20 06:43:56.824061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.824091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.824456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.824490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.824831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.824864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.825090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.825120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.825499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.825531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.825876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.825907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.826167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.826200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.826540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.826572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.826951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.826981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.827326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.827357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.827673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.827703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.828061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.828094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.828348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.828380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.828724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.828753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.829110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.829140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.829527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.829558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.829912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.829943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.830042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.830070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d3c000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Read completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 Write completed with error (sct=0, sc=8)
00:33:36.630 starting I/O failed
00:33:36.630 [2024-11-20 06:43:56.830910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:33:36.630 [2024-11-20 06:43:56.831426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.831545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.630 qpair failed and we were unable to recover it.
00:33:36.630 [2024-11-20 06:43:56.831944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.630 [2024-11-20 06:43:56.831984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.832472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.832575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.832886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.832929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.833439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.833570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.833953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.833992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.834215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.834248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.834634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.834667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.835007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.835041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.835411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.835442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.835823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.835856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.836208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.836240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.836615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.836646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.837002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.837035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.837412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.837443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.837808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.837840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.838206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.838237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.838616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.838646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.839026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.839057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.839290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.839321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.839691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.839722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.840078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.840110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.840505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.840537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.840889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.840921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.841303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.841336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.841702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.841734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.841946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.841983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.842371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.842404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.842621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.842651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.843029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.843060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.843283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.843313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.843525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.843556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.843922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.843952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.844304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.844337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.844558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.844589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.844944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.844976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.845187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.845219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.845439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.845469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.845831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.631 [2024-11-20 06:43:56.845861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.631 qpair failed and we were unable to recover it.
00:33:36.631 [2024-11-20 06:43:56.846229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.846263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.846634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.846665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.846918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.846948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.847308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.847340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.847694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.847726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.848082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.848114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.848481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.848512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.848874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.848906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.849265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.849297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.849661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.849690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.850067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.850097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.850311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.850342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.850663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.850695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.851060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.851091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.851492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.851523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.851889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.851920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.852268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.852300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.852661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.852691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.852941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.852970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.853327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.853359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.853594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.853629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.853977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.854007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.854249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.854281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.854517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.854547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.854914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.854946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.855297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.855328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.855691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.855722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.856065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.856102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.856459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.856492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.856739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.856774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.857123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.857156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.857553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.857583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.857892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.857922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.858138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.858180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.858406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.858437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.858838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.858870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.859244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.859276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.859643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.632 [2024-11-20 06:43:56.859674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.632 qpair failed and we were unable to recover it.
00:33:36.632 [2024-11-20 06:43:56.860022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.860055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.860414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.860445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.860806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.860835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.861206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.861237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.861610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.861641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.861990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.862019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.862397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.862428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.862797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.862829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.863050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.863080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.863413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.863443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.863808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.863839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.864087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.864121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.864359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.864389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.864733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.633 [2024-11-20 06:43:56.864763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.633 qpair failed and we were unable to recover it.
00:33:36.633 [2024-11-20 06:43:56.865101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.865134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.865500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.865531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.865903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.865935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.866281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.866312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.866527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.866556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.866917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.866949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.867320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.867353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.867718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.867747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.868106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.868136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.868419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.868452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 
00:33:36.633 [2024-11-20 06:43:56.868807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.868838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.869204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.869238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.869605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.869635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.869988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.870019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.870235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.870265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.870643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.870693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.871053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.871084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.871449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.871482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.871834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.633 [2024-11-20 06:43:56.871865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.633 qpair failed and we were unable to recover it. 00:33:36.633 [2024-11-20 06:43:56.872223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.634 [2024-11-20 06:43:56.872255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.634 qpair failed and we were unable to recover it. 
00:33:36.634 [2024-11-20 06:43:56.872615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.634 [2024-11-20 06:43:56.872645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.634 qpair failed and we were unable to recover it. 00:33:36.908 [2024-11-20 06:43:56.872996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.908 [2024-11-20 06:43:56.873029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.908 qpair failed and we were unable to recover it. 00:33:36.908 [2024-11-20 06:43:56.873380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.908 [2024-11-20 06:43:56.873415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.908 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.873760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.873789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.873892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.873922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.874238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.874270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.874634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.874664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.875030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.875062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.875286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.875318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.875685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.875718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 
00:33:36.909 [2024-11-20 06:43:56.876096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.876131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.876378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.876409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.876766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.876797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.877181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.877213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.877557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.877588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.877926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.877955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.878315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.878346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.878706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.878739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.879094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.879125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.879445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.879478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 
00:33:36.909 [2024-11-20 06:43:56.879835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.879867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.880090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.880121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.880521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.880554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.880913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.880944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.881313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.881346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.881705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.881736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.882091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.882122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.882493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.882526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.882879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.882911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.883272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.883304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 
00:33:36.909 [2024-11-20 06:43:56.883571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.883602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.909 [2024-11-20 06:43:56.883938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.909 [2024-11-20 06:43:56.883969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.909 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.884340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.884373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.884733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.884765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.885114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.885145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.885478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.885515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.885865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.885897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.886224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.886256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.886605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.886637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.887007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.887038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 
00:33:36.910 [2024-11-20 06:43:56.887412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.887444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.887767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.887798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.888012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.888042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.888408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.888442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.888649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.888680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.889039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.889071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.889453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.889485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.889841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.889872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.890233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.890264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.890632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.890664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 
00:33:36.910 [2024-11-20 06:43:56.891021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.891053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.891412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.891445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.891775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.891804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.892170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.892201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.892557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.892590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.892948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.892978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.893230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.893261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.893625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.893656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.894007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.894040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.894426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.894456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 
00:33:36.910 [2024-11-20 06:43:56.894699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.894728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.895086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.895117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.895520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.910 [2024-11-20 06:43:56.895554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.910 qpair failed and we were unable to recover it. 00:33:36.910 [2024-11-20 06:43:56.895915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.895947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.896310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.896342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.896590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.896619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.896968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.896998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.897380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.897412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.897786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.897818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.898027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.898059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 
00:33:36.911 [2024-11-20 06:43:56.898412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.898442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.898670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.898700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.899045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.899077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.899425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.899458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.899683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.899714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.900086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.900124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.900523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.900555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.900787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.900818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.901240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.901270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.901603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.901633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 
00:33:36.911 [2024-11-20 06:43:56.901996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.902027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.902281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.902312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.902553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.902583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.902938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.902967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.903324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.903356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.903603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.903633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.903978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.904009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.904343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.904376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.904750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.904780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.904919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.904948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 
00:33:36.911 [2024-11-20 06:43:56.905191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.905223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.905556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.905588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.911 qpair failed and we were unable to recover it. 00:33:36.911 [2024-11-20 06:43:56.905927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.911 [2024-11-20 06:43:56.905958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.906289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.906321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.906670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.906702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.907059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.907091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.907467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.907500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.907729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.907761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.908133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.908188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.908559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.908591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 
00:33:36.912 [2024-11-20 06:43:56.908940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.908972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.909206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.909238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.909540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.909572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.909927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.909957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.910314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.910346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.910699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.910729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.911081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.911111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.911452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.911485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.911839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.911875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.912246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.912279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 
00:33:36.912 [2024-11-20 06:43:56.912649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.912680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.913038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.913067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.913288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.913319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.913549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.913582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.913932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.913962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.914245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.914283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.914670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.914701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.915028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.915060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.915443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.915476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.912 [2024-11-20 06:43:56.915833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.915864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 
00:33:36.912 [2024-11-20 06:43:56.916224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.912 [2024-11-20 06:43:56.916257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.912 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.916646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.916676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.917025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.917056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.917270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.917302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.917651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.917683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.918032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.918063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.918447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.918479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.918707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.918737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.918951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.918982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 00:33:36.913 [2024-11-20 06:43:56.919392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.913 [2024-11-20 06:43:56.919424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.913 qpair failed and we were unable to recover it. 
00:33:36.913 [2024-11-20 06:43:56.919784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.913 [2024-11-20 06:43:56.919815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.913 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats 149 more times between 06:43:56.920193 and 06:43:56.972946; identical entries condensed ...]
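For context on the repeated failure above: errno = 111 is ECONNREFUSED on Linux. Each TCP connect() toward the NVMe-oF target at 10.0.0.2 port 4420 is being actively refused, which ordinarily means no listener is bound to that address/port at this point in the test, so every SYN is answered with a RST and the qpair socket can never be established. A minimal standalone sketch that reproduces the same errno (illustrative only, not SPDK code; the address and port are hard-coded to mirror the log):

/* repro_econnrefused.c -- connect() to a port with no listener.
 * Build: gcc -Wall -o repro repro_econnrefused.c
 * Expected output when the host is up but nothing listens on 10.0.0.2:4420:
 *   connect() failed, errno = 111 (Connection refused)
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}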
00:33:36.918 [2024-11-20 06:43:56.973332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.918 [2024-11-20 06:43:56.973364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.918 qpair failed and we were unable to recover it.
[... one more identical failure for tqpair=0x7f9d44000b90 at 06:43:56.973459; condensed ...]
00:33:36.918 [2024-11-20 06:43:56.974034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.918 [2024-11-20 06:43:56.974146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420
00:33:36.918 qpair failed and we were unable to recover it.
[... the tqpair handle changes here from 0x7f9d44000b90 to 0x11fd0c0, and the same three-line failure pattern repeats 57 more times for tqpair=0x11fd0c0 between 06:43:56.974457 and 06:43:56.994777; identical entries condensed ...]
00:33:36.920 [2024-11-20 06:43:56.995149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.995213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.995556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.995587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.995945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.995976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.996317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.996349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.996583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.996613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.996851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.996882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.997254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.997285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.997622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.997654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.997864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.997892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.998183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.998214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 
00:33:36.920 [2024-11-20 06:43:56.998578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.998608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.998984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.999014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.999345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.999378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.999734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:56.999764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:56.999989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.920 [2024-11-20 06:43:57.000018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.920 qpair failed and we were unable to recover it. 00:33:36.920 [2024-11-20 06:43:57.000274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.921 [2024-11-20 06:43:57.000307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.921 qpair failed and we were unable to recover it. 00:33:36.921 [2024-11-20 06:43:57.000409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.921 [2024-11-20 06:43:57.000436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fd0c0 with addr=10.0.0.2, port=4420 00:33:36.921 qpair failed and we were unable to recover it. 00:33:36.921 [2024-11-20 06:43:57.000653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2e00 is same with the state(6) to be set 00:33:36.921 [2024-11-20 06:43:57.001378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.921 [2024-11-20 06:43:57.001484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.921 qpair failed and we were unable to recover it. 00:33:36.921 [2024-11-20 06:43:57.001799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.921 [2024-11-20 06:43:57.001840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.921 qpair failed and we were unable to recover it. 
00:33:36.921 [2024-11-20 06:43:57.002409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.921 [2024-11-20 06:43:57.002514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.921 qpair failed and we were unable to recover it.
00:33:36.926 [2024-11-20 06:43:57.055903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.926 [2024-11-20 06:43:57.055939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.926 qpair failed and we were unable to recover it.
00:33:36.926 [2024-11-20 06:43:57.056156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.056197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.056596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.056628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.056972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.057004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.057386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.057418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.057774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.057807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.058189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.058222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.058466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.058497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.058882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.058920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.059243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.059276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.059646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.059677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 
00:33:36.926 [2024-11-20 06:43:57.060048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.060079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.060277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.060309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.060721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.060751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.061103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.061134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.061522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.061555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.061909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.061941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.062303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.062337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.062725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.062755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.062979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.063009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.063413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.063444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 
00:33:36.926 [2024-11-20 06:43:57.063794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.063826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.064195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.064229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.064601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.064631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.065000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.065033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.065417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.065450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.926 qpair failed and we were unable to recover it. 00:33:36.926 [2024-11-20 06:43:57.065797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.926 [2024-11-20 06:43:57.065833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.066153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.066193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.066447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.066478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.066829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.066866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.067198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.067233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 
00:33:36.927 [2024-11-20 06:43:57.067626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.067659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.068008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.068040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.068432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.068463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.068644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.068674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.069029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.069060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.069420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.069452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.069799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.069829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.069923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.069951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.070305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.070338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.070714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.070744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 
00:33:36.927 [2024-11-20 06:43:57.071103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.071135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.071351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.071382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.071738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.071770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.071988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.072018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.072338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.072369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.072728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.072759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.072990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.073020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.073266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.073303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.073647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.073679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.074035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.074066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 
00:33:36.927 [2024-11-20 06:43:57.074412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.074447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.074814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.074845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.075194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.075226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.075582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.075613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.927 [2024-11-20 06:43:57.075829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.927 [2024-11-20 06:43:57.075859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.927 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.076207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.076241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.076629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.076659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.076894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.076923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.077214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.077245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.077623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.077654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 
00:33:36.928 [2024-11-20 06:43:57.078016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.078048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.078420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.078453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.078805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.078836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.079054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.079083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.079450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.079482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.079702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.079731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.079947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.079978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.080323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.080356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.080578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.080608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.080977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.081009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 
00:33:36.928 [2024-11-20 06:43:57.081354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.081387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.081740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.081770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.082006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.082043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.082302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.082333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.082548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.082578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.082796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.082827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.083200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.083233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.083487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.083516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.083729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.083758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.084109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.084141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 
00:33:36.928 [2024-11-20 06:43:57.084494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.084525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.084880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.084911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.085027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.085061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.085454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.085486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.085704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.085733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.086105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.086136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.086353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.086384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.928 qpair failed and we were unable to recover it. 00:33:36.928 [2024-11-20 06:43:57.086756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.928 [2024-11-20 06:43:57.086793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.087138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.087178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.087541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.087571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 
00:33:36.929 [2024-11-20 06:43:57.087923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.087953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.088318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.088350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.088761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.088791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.089134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.089177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.089542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.089574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.089944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.089974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.090198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.090233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.090597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.090627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.090975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.091005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.091352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.091384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 
00:33:36.929 [2024-11-20 06:43:57.091750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.091780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.092140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.092180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.092590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.092622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.092864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.092898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.093178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.093209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.093446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.093475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.093849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.093880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.094263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.094296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.094650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.094682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.095055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.095086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 
00:33:36.929 [2024-11-20 06:43:57.095474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.095505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.095879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.095910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.096275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.096307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.096689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.096720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.097067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.097101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.097397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.097434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.929 [2024-11-20 06:43:57.097685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.929 [2024-11-20 06:43:57.097715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.929 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.098061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.098092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.098352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.098384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.098751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.098780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 
00:33:36.930 [2024-11-20 06:43:57.099016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.099045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.099372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.099404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.099625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.099654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.100033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.100063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.100427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.100460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.100837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.100869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.101214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.101247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.101493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.101530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.101873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.101905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 00:33:36.930 [2024-11-20 06:43:57.102261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:36.930 [2024-11-20 06:43:57.102295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:36.930 qpair failed and we were unable to recover it. 
00:33:36.930 [2024-11-20 06:43:57.102664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:36.930 [2024-11-20 06:43:57.102695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:36.930 qpair failed and we were unable to recover it.
00:33:37.213 (the preceding connect() failed / sock connection error / qpair failed sequence repeats unchanged for every reconnect attempt from [2024-11-20 06:43:57.102664] through [2024-11-20 06:43:57.178938], all against tqpair=0x7f9d44000b90 at 10.0.0.2, port=4420, errno = 111)
00:33:37.213 [2024-11-20 06:43:57.179301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.179333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.179709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.179743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.179957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.179988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.180392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.180424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.180782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.180812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.181180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.181213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.181534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.181565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.181918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.181950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.182283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.182315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 00:33:37.213 [2024-11-20 06:43:57.182631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.182664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.213 qpair failed and we were unable to recover it. 
00:33:37.213 [2024-11-20 06:43:57.183016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.213 [2024-11-20 06:43:57.183046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.183287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.183321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.183689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.183720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.184075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.184107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.184451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.184489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.184834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.184863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.185203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.185235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.185578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.185608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.185958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.185990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.186328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.186361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 
00:33:37.214 [2024-11-20 06:43:57.186726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.186757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.187119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.187151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.187531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.187562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.187923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.187953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.188174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.188206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.188568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.188601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.188950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.188982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.189329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.189360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.189762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.189799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.190148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.190188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 
00:33:37.214 [2024-11-20 06:43:57.190563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.190594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.190943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.190974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.191346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.191376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.191752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.191782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.192137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.192190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.192536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.192568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.192782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.192813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.193171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.193204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.193589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.193621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.193982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.194013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 
00:33:37.214 [2024-11-20 06:43:57.194395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.194427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.194536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.194565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.194911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.194942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.195297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.195330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.195563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.195595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.195963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.195993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.196329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.214 [2024-11-20 06:43:57.196362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.214 qpair failed and we were unable to recover it. 00:33:37.214 [2024-11-20 06:43:57.196701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.196731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.197096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.197127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.197539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.197571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 
00:33:37.215 [2024-11-20 06:43:57.197913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.197944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.198306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.198337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.198709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.198740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.199103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.199134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.199497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.199535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.199901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.199932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.200153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.200197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.200550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.200580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.200824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.200854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.201201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.201232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 
00:33:37.215 [2024-11-20 06:43:57.201549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.201581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.201932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.201963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.202297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.202331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.202653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.202683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.203035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.203066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.203410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.203441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.203804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.203835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.204195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.204228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.204617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.204648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.205037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.205066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 
00:33:37.215 [2024-11-20 06:43:57.205292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.205322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.205420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.205451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.205850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.205882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.206270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.206302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.206668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.206700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.207051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.207083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.207430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.207462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.207798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.207829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.215 [2024-11-20 06:43:57.208188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.215 [2024-11-20 06:43:57.208220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.215 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.208557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.208587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 
00:33:37.216 [2024-11-20 06:43:57.208934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.208964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.209330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.209364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.209738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.209768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.210131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.210175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.210561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.210594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.210941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.210970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.211343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.211374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.211721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.211752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.212121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.212152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.212558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.212588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 
00:33:37.216 [2024-11-20 06:43:57.212815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.212847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.213216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.213247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.213457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.213485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.213846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.213873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.214240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.214277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.214515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.214547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.214870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.214899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.215115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.215143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.215465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.215493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.215879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.215907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 
00:33:37.216 [2024-11-20 06:43:57.216321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.216350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.216705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.216734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.217105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.217135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.217447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.217479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.217826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.217856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.218226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.218258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.218657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.218690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.219039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.219072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.219446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.219481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 00:33:37.216 [2024-11-20 06:43:57.219726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.216 [2024-11-20 06:43:57.219764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.216 qpair failed and we were unable to recover it. 
00:33:37.216 [2024-11-20 06:43:57.220109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.220143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.220508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.220542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.220895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.220927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.221181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.221218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.221584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.221616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.221976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.222009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.222391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.222427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.222773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.222806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.223166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.223201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.223454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.223491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 
00:33:37.217 [2024-11-20 06:43:57.223846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.223882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.224235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.224271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.224649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.224681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.225040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.225073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.225297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.225331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.225623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.225656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.226007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.226040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.226230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.226265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.226623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.226656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 00:33:37.217 [2024-11-20 06:43:57.226886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.217 [2024-11-20 06:43:57.226921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.217 qpair failed and we were unable to recover it. 
00:33:37.217 [2024-11-20 06:43:57.227337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.217 [2024-11-20 06:43:57.227372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.217 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously here with only the timestamps advancing, until the final occurrence below ...]
00:33:37.223 [2024-11-20 06:43:57.302678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.223 [2024-11-20 06:43:57.302707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.223 qpair failed and we were unable to recover it.
00:33:37.223 [2024-11-20 06:43:57.303070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.303102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.303488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.303521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.303881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.303913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.304156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.304195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.304445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.304475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.304845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.304878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.305108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.305141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.305523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.305556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.305893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.305925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.306297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.306332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 
00:33:37.224 [2024-11-20 06:43:57.306680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.306712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.306897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.306931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.307280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.307313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.307672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.307704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.308067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.308100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.308469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.308502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.308795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.308826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.309089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.309120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.309513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.309546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.309796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.309828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 
00:33:37.224 [2024-11-20 06:43:57.310184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.310217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.310576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.310606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.310956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.310986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.311365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.311397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.311752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.311785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.312156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.312196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.312409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.312439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.312789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.312820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.313173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.313207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.313579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.313609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 
00:33:37.224 [2024-11-20 06:43:57.313961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.313993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.314338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.314371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.314723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.314754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.315118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.315149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.315527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.224 [2024-11-20 06:43:57.315566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.224 qpair failed and we were unable to recover it. 00:33:37.224 [2024-11-20 06:43:57.315904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.315934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.316293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.316327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.316686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.316718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.316937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.316967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.317324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.317357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 
00:33:37.225 [2024-11-20 06:43:57.317711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.317745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.318109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.318140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.318510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.318542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.318783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.318818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.319182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.319215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.319570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.319601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.319957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.319988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.320366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.320397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.320651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.320681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.321041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.321074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 
00:33:37.225 [2024-11-20 06:43:57.321465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.321501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.321863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.321894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.322252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.322285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.322649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.322679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.323030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.323062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.323426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.323457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.323812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.323842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.324204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.324237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.324631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.324662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.325019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.325051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 
00:33:37.225 [2024-11-20 06:43:57.325415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.325446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.325801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.325832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.326202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.326235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.326477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.326507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.225 qpair failed and we were unable to recover it. 00:33:37.225 [2024-11-20 06:43:57.326841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.225 [2024-11-20 06:43:57.326871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.327100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.327130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.327391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.327428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.327779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.327811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.328177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.328210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.328417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.328447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 
00:33:37.226 [2024-11-20 06:43:57.328828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.328859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.329207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.329241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.329589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.329621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.329986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.330017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.330379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.330419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.330786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.330817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.331225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.331258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.331542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.331572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.331949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.331982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.332149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.332189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 
00:33:37.226 [2024-11-20 06:43:57.332585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.332617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.332974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.333007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.333346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.333378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.333734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.333766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.334136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.334176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.334562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.334594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.334819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.334852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.335235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.335268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.335650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.335681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.336041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.336071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 
00:33:37.226 [2024-11-20 06:43:57.336267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.336299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.336676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.336707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.336930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.336963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.337348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.337381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.337597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.337627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.338000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.338031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.338398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.338437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.338796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.338828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.339045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.339079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.339295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.339327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 
00:33:37.226 [2024-11-20 06:43:57.339553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.339587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.339807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.226 [2024-11-20 06:43:57.339839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.226 qpair failed and we were unable to recover it. 00:33:37.226 [2024-11-20 06:43:57.340215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.340248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.340630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.340662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.341013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.341045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.341430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.341467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.341821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.341854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.342165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.342199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.342601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.342633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.342851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.342882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 
00:33:37.227 [2024-11-20 06:43:57.343294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.343326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.343448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.343484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.343829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.343864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.343967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.343999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.344346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.344382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.344734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.344767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.345115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.345147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.345556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.345589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.345942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.345974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.346188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.346222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 
00:33:37.227 [2024-11-20 06:43:57.346606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.346639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.346990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.347023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.347243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.347277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.347518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.347553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.347899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.347931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.348282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.348317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.348532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.348563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.348778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.348811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.349042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.349075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 00:33:37.227 [2024-11-20 06:43:57.349292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.227 [2024-11-20 06:43:57.349325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.227 qpair failed and we were unable to recover it. 
00:33:37.227 [2024-11-20 06:43:57.349686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.227 [2024-11-20 06:43:57.349718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.227 qpair failed and we were unable to recover it.
[... the same three-line triplet - connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7f9d44000b90 at 10.0.0.2:4420, "qpair failed and we were unable to recover it." - repeats for every retry from 06:43:57.349 through 06:43:57.392; only the timestamps differ ...]
00:33:37.231 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:37.231 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:33:37.231 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:37.231 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:37.231 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect()/qpair-failure triplet continues to interleave with the shell trace above (timestamps 06:43:57.392 through 06:43:57.395) ...]
[... the same triplet keeps repeating for the remainder of this retry window (timestamps 06:43:57.395 through 06:43:57.424), with no other output ...]
00:33:37.233 [2024-11-20 06:43:57.424689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.424720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.425070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.425107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.425417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.425449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.425794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.425824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.426070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.426101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.426471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.426502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.426863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.426895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.427238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.427271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.427705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.427737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.428056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.428087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 
00:33:37.233 [2024-11-20 06:43:57.428484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.428516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.428754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.428788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.429187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.429220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.429543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.429574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.429927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.233 [2024-11-20 06:43:57.429957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.233 qpair failed and we were unable to recover it. 00:33:37.233 [2024-11-20 06:43:57.430213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.430244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.430609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.430640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.430992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.431025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.431413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.431447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.431799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.431829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 
00:33:37.234 [2024-11-20 06:43:57.432053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.432083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.432433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.432466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.432722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.432752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.432983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.433012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.433393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.433427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.433779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.433811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.434036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.434067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.434341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.434372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.434719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.434753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 00:33:37.234 [2024-11-20 06:43:57.435111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.234 [2024-11-20 06:43:57.435142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420 00:33:37.234 qpair failed and we were unable to recover it. 
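errno = 111 is ECONNREFUSED: the initiator is dialing 10.0.0.2:4420 while nothing on the target side is accepting there, so the kernel rejects every connect() and the host keeps retrying the qpair, which appears to be exactly what this nvmf_target_disconnect case exercises. A quick way to confirm the refusal from the initiator box, as a sketch with stock tools (address and port taken from the log; availability of ss and nc is an assumption):

    ss -ltn 'sport = :4420'                  # empty output => no NVMe/TCP listener bound yet
    nc -z -w1 10.0.0.2 4420 || echo refused  # fails fast with ECONNREFUSED until a listener appears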
00:33:37.234 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:37.234 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:37.234 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.234 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:37.234 [... connect()/qpair failure pairs (06:43:57.435504 through .437963, same errno = 111, same tqpair/addr/port) run interleaved with the four trace lines above ...]
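Two steps land in the trace above: the harness arms its cleanup trap (process_shm and nvmftestfini run on SIGINT, SIGTERM, or normal exit, so shared-memory capture and teardown happen even if the test dies early), and it creates the target's backing device, a 64 MiB malloc bdev with 512-byte blocks named Malloc0. Standalone, the same two steps look roughly like this (a sketch; the rpc.py path and default RPC socket are assumptions):

    trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB ramdisk, 512 B blocks; prints the bdev name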
00:33:37.234 [... dozens more identical connect() errno = 111 / tqpair=0x7f9d44000b90 connection failures (06:43:57.438341 through .470015), still targeting 10.0.0.2:4420, every attempt ending "qpair failed and we were unable to recover it." ...]
00:33:37.237 Malloc0
00:33:37.237 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.237 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:37.237 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.237 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:37.237 [... connect()/qpair failures keep interleaving with the trace output (06:43:57.470402 through .472776) ...]
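The bdev_malloc_create call returned its bdev name, Malloc0, and the script now initializes the TCP transport inside the nvmf target; the -o flag is a harness-passed transport option, left aside here. A minimal standalone equivalent (sketch, default RPC socket assumed):

    ./scripts/rpc.py nvmf_create_transport -t TCP   # expect the '*** TCP Transport Init ***' notice below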
00:33:37.237 [... more of the same errno = 111 connection failures (06:43:57.473033 through .476604) ...]
00:33:37.502 [2024-11-20 06:43:57.476925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.476956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.477208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.477239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.477458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.477466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:37.502 [2024-11-20 06:43:57.477489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.477849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.477879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.478238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.478270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.478655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.478686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.478922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.478951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.479334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.479366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.479739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.479770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.480146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.480187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.480441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.480470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.480716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.480755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.481101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.481131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.481509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.481541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.481770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.481800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.482150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.482208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.482456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.482486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.482829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.482859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.483242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.483275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.483618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.483647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.484024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.484055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.484431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.484465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.484810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.484841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.485198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.485229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.502 [2024-11-20 06:43:57.485597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.502 [2024-11-20 06:43:57.485628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.502 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.485981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.486014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.486389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.486421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.503 [2024-11-20 06:43:57.486801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.486832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:37.503 [2024-11-20 06:43:57.487217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.503 [2024-11-20 06:43:57.487251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:37.503 [2024-11-20 06:43:57.487629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.487660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.488044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.488074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.488450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.488483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.488840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.488870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.489093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.489122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.489527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.489558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.489908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.489940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.490286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.490318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.490566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.490596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.490836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.490866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.491214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.491245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.491628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.491659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.492017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.492050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.492425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.492458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.492836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.492867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.493246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.493277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.493520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.493550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.493911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.493944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.494215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.494251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.494617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.494649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.495001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.495039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.495380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.495413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.495772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.495804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.496012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.496043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.496409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.496443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.496771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.496802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.497172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.497204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.497557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.497588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.497763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.497796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.498135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.498174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 [2024-11-20 06:43:57.498291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.503 [2024-11-20 06:43:57.498320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d44000b90 with addr=10.0.0.2, port=4420
00:33:37.503 qpair failed and we were unable to recover it.
00:33:37.503 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.503 [2024-11-20 06:43:57.498851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:37.504 [2024-11-20 06:43:57.498961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.504 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:37.504 [2024-11-20 06:43:57.499491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.499598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.500073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.500115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.500508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.500543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.500895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.500927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.501046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.501075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.501361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.501397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.501754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.501786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.502193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.502227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.502475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.502508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.502735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.502766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.502991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.503023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.503385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.503419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.503639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.503670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.504047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.504093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.504345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.504384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.504734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.504766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.505130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.505200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.505601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.505635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.505861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.505892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.506279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.506312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.506714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.506746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.507092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.507123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.507383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.507416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.507816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.507848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.508202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.508234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.508623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.508654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.509030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.509062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.509463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.509496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.509856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.509888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.510317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.510350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.504 [2024-11-20 06:43:57.510720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.510751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:37.504 [2024-11-20 06:43:57.511134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.511178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.504 [2024-11-20 06:43:57.511509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.511543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:37.504 [2024-11-20 06:43:57.511763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.511795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.512076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.512109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.512356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.512391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.504 [2024-11-20 06:43:57.512739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.504 [2024-11-20 06:43:57.512772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.504 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.513116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.513149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.513391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.513426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.513812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.513844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.514072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.514105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.514298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.514341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.514731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.514765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.514996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.515029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.515421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.515456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.515672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.515705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.515958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.515990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.516383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.516418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.516627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.516661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.516980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.517015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.517228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.517262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.517611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:37.505 [2024-11-20 06:43:57.517650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d48000b90 with addr=10.0.0.2, port=4420
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.517847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:37.505 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.505 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:37.505 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.505 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:37.505 [2024-11-20 06:43:57.528773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.528912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.528959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.528981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.529000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.529049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.505 06:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3028379
00:33:37.505 [2024-11-20 06:43:57.538586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.538686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.538716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.538731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.538745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.538780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.548610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.548727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.548756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.548771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.548785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.548815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.558552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.558645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.558668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.558681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.558690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.558713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.568576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.568693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.568709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.568719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.568726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.568744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.578522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.578588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.578602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.578608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.578613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.578625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.588524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.588584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.588596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.588602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.588608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.588620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.505 [2024-11-20 06:43:57.598598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.505 [2024-11-20 06:43:57.598656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.505 [2024-11-20 06:43:57.598669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.505 [2024-11-20 06:43:57.598680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.505 [2024-11-20 06:43:57.598686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.505 [2024-11-20 06:43:57.598698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.505 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.608609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.608672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.608685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.608691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.608697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.608709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.618637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.618689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.618702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.618708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.618713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.618725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.628663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.628716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.628728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.628733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.628738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.628750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.638532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.638591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.638606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.638612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.638618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.638634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.648730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.648791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.648804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.648810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.648814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.648827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.658765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.658819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.658832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.658837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.658842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.658854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.668756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.668832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.668844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.668849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.668854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.668866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.678775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.678834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.678856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.678863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.678868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.678884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.688815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:37.506 [2024-11-20 06:43:57.688870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:37.506 [2024-11-20 06:43:57.688883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:37.506 [2024-11-20 06:43:57.688888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:37.506 [2024-11-20 06:43:57.688893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:37.506 [2024-11-20 06:43:57.688905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:37.506 qpair failed and we were unable to recover it.
00:33:37.506 [2024-11-20 06:43:57.698819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.506 [2024-11-20 06:43:57.698870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.506 [2024-11-20 06:43:57.698892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.506 [2024-11-20 06:43:57.698899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.506 [2024-11-20 06:43:57.698904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.506 [2024-11-20 06:43:57.698919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.506 qpair failed and we were unable to recover it. 00:33:37.506 [2024-11-20 06:43:57.708861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.506 [2024-11-20 06:43:57.708915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.506 [2024-11-20 06:43:57.708936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.506 [2024-11-20 06:43:57.708942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.506 [2024-11-20 06:43:57.708948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.506 [2024-11-20 06:43:57.708964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.506 qpair failed and we were unable to recover it. 00:33:37.506 [2024-11-20 06:43:57.718898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.506 [2024-11-20 06:43:57.718953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.506 [2024-11-20 06:43:57.718965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.506 [2024-11-20 06:43:57.718971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.506 [2024-11-20 06:43:57.718976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.506 [2024-11-20 06:43:57.718988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.506 qpair failed and we were unable to recover it. 
00:33:37.506 [2024-11-20 06:43:57.728924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.507 [2024-11-20 06:43:57.728982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.507 [2024-11-20 06:43:57.729005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.507 [2024-11-20 06:43:57.729012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.507 [2024-11-20 06:43:57.729017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.507 [2024-11-20 06:43:57.729033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.507 qpair failed and we were unable to recover it. 00:33:37.507 [2024-11-20 06:43:57.739053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.507 [2024-11-20 06:43:57.739110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.507 [2024-11-20 06:43:57.739123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.507 [2024-11-20 06:43:57.739128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.507 [2024-11-20 06:43:57.739133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.507 [2024-11-20 06:43:57.739145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.507 qpair failed and we were unable to recover it. 00:33:37.507 [2024-11-20 06:43:57.748990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.507 [2024-11-20 06:43:57.749032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.507 [2024-11-20 06:43:57.749043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.507 [2024-11-20 06:43:57.749049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.507 [2024-11-20 06:43:57.749053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.507 [2024-11-20 06:43:57.749065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.507 qpair failed and we were unable to recover it. 
00:33:37.507 [2024-11-20 06:43:57.759033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.507 [2024-11-20 06:43:57.759088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.507 [2024-11-20 06:43:57.759098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.507 [2024-11-20 06:43:57.759103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.507 [2024-11-20 06:43:57.759108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.507 [2024-11-20 06:43:57.759119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.507 qpair failed and we were unable to recover it. 00:33:37.507 [2024-11-20 06:43:57.769095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.507 [2024-11-20 06:43:57.769187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.507 [2024-11-20 06:43:57.769198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.507 [2024-11-20 06:43:57.769210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.507 [2024-11-20 06:43:57.769218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.507 [2024-11-20 06:43:57.769228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.507 qpair failed and we were unable to recover it. 00:33:37.769 [2024-11-20 06:43:57.778958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.769 [2024-11-20 06:43:57.779005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.769 [2024-11-20 06:43:57.779017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.769 [2024-11-20 06:43:57.779022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.769 [2024-11-20 06:43:57.779027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.769 [2024-11-20 06:43:57.779038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.769 qpair failed and we were unable to recover it. 
00:33:37.769 [2024-11-20 06:43:57.789052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.769 [2024-11-20 06:43:57.789093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.769 [2024-11-20 06:43:57.789104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.769 [2024-11-20 06:43:57.789109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.769 [2024-11-20 06:43:57.789115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.769 [2024-11-20 06:43:57.789125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.769 qpair failed and we were unable to recover it. 00:33:37.769 [2024-11-20 06:43:57.799103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.769 [2024-11-20 06:43:57.799152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.769 [2024-11-20 06:43:57.799166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.769 [2024-11-20 06:43:57.799171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.769 [2024-11-20 06:43:57.799176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.769 [2024-11-20 06:43:57.799187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.769 qpair failed and we were unable to recover it. 00:33:37.769 [2024-11-20 06:43:57.809149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.769 [2024-11-20 06:43:57.809206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.769 [2024-11-20 06:43:57.809216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.769 [2024-11-20 06:43:57.809222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.769 [2024-11-20 06:43:57.809226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.769 [2024-11-20 06:43:57.809237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.769 qpair failed and we were unable to recover it. 
00:33:37.769 [2024-11-20 06:43:57.819202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.769 [2024-11-20 06:43:57.819287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.769 [2024-11-20 06:43:57.819299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.769 [2024-11-20 06:43:57.819305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.769 [2024-11-20 06:43:57.819310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.769 [2024-11-20 06:43:57.819321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.769 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.829200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.829275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.829287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.829294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.829300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.829310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.839215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.839270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.839280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.839285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.839290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.839300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 
00:33:37.770 [2024-11-20 06:43:57.849258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.849304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.849314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.849320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.849324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.849335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.859256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.859307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.859321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.859326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.859331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.859342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.869257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.869304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.869314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.869319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.869324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.869335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 
00:33:37.770 [2024-11-20 06:43:57.879315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.879368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.879378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.879384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.879388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.879399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.889358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.889408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.889418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.889424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.889428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.889439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.899359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.899410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.899420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.899425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.899433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.899444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 
00:33:37.770 [2024-11-20 06:43:57.909400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.909450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.909460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.909465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.909470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.909480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.919448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.919499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.919509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.919514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.919519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.919529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.929488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.929541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.929551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.929556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.929561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.929571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 
00:33:37.770 [2024-11-20 06:43:57.939483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.939538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.939547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.939553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.939557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.770 [2024-11-20 06:43:57.939568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.770 qpair failed and we were unable to recover it. 00:33:37.770 [2024-11-20 06:43:57.949515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.770 [2024-11-20 06:43:57.949569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.770 [2024-11-20 06:43:57.949580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.770 [2024-11-20 06:43:57.949585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.770 [2024-11-20 06:43:57.949590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:57.949600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 00:33:37.771 [2024-11-20 06:43:57.959536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:57.959586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:57.959596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:57.959602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:57.959606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:57.959617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 
00:33:37.771 [2024-11-20 06:43:57.969587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:57.969642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:57.969651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:57.969657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:57.969662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:57.969672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 00:33:37.771 [2024-11-20 06:43:57.979524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:57.979572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:57.979582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:57.979588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:57.979592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:57.979602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 00:33:37.771 [2024-11-20 06:43:57.989645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:57.989695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:57.989705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:57.989711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:57.989715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:57.989726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 
00:33:37.771 [2024-11-20 06:43:57.999655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:57.999704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:57.999714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:57.999719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:57.999724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:57.999735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 00:33:37.771 [2024-11-20 06:43:58.009690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:58.009746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:58.009757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:58.009762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:58.009767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:58.009777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 00:33:37.771 [2024-11-20 06:43:58.019701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:58.019745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:58.019755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:58.019761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:58.019766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:58.019776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 
00:33:37.771 [2024-11-20 06:43:58.029742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:58.029785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:58.029796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:58.029805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:58.029811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:58.029821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 00:33:37.771 [2024-11-20 06:43:58.039778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.771 [2024-11-20 06:43:58.039846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.771 [2024-11-20 06:43:58.039855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.771 [2024-11-20 06:43:58.039861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.771 [2024-11-20 06:43:58.039866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:37.771 [2024-11-20 06:43:58.039876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.771 qpair failed and we were unable to recover it. 00:33:38.033 [2024-11-20 06:43:58.049685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.033 [2024-11-20 06:43:58.049751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.033 [2024-11-20 06:43:58.049761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.033 [2024-11-20 06:43:58.049766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.033 [2024-11-20 06:43:58.049772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.033 [2024-11-20 06:43:58.049783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.033 qpair failed and we were unable to recover it. 
00:33:38.033 [2024-11-20 06:43:58.059833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.033 [2024-11-20 06:43:58.059880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.059890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.059895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.059900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.059910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.069850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.069896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.069906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.069911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.069916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.069929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.079889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.079938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.079948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.079954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.079958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.079968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 
00:33:38.034 [2024-11-20 06:43:58.089922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.089977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.089996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.090003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.090009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.090024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.099940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.099994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.100014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.100020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.100026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.100041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.109961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.110008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.110019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.110025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.110030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.110041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 
00:33:38.034 [2024-11-20 06:43:58.120000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.120056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.120067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.120073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.120077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.120088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.130016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.130067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.130077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.130083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.130087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.130098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.140048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.140099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.140109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.140114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.140119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.140130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 
00:33:38.034 [2024-11-20 06:43:58.150101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.150183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.150195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.150200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.150206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.150217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.160131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.160210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.160221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.160231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.160236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.160247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.034 [2024-11-20 06:43:58.170161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.170215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.170225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.170230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.170235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.170245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 
00:33:38.034 [2024-11-20 06:43:58.180186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.034 [2024-11-20 06:43:58.180234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.034 [2024-11-20 06:43:58.180244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.034 [2024-11-20 06:43:58.180250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.034 [2024-11-20 06:43:58.180255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.034 [2024-11-20 06:43:58.180265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.034 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.190183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.190258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.190269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.190275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.190280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.190291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.200237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.200291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.200301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.200306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.200311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.200325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 
00:33:38.035 [2024-11-20 06:43:58.210249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.210300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.210310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.210316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.210320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.210331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.220257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.220304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.220314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.220319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.220324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.220334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.230302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.230347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.230357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.230362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.230367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.230377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 
00:33:38.035 [2024-11-20 06:43:58.240324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.240375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.240384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.240390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.240395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.240405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.250352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.250400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.250410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.250415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.250420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.250430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.260387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.260430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.260440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.260445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.260450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.260461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 
00:33:38.035 [2024-11-20 06:43:58.270406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.270497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.270507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.270513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.270517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.270529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.280451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.280506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.280516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.280522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.280527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.280538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.035 [2024-11-20 06:43:58.290482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.290532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.290544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.290550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.290554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.290565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 
00:33:38.035 [2024-11-20 06:43:58.300506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.035 [2024-11-20 06:43:58.300555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.035 [2024-11-20 06:43:58.300565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.035 [2024-11-20 06:43:58.300570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.035 [2024-11-20 06:43:58.300575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.035 [2024-11-20 06:43:58.300585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.035 qpair failed and we were unable to recover it. 00:33:38.298 [2024-11-20 06:43:58.310556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.310608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.310618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.310623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.310628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.310639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 00:33:38.299 [2024-11-20 06:43:58.320564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.320616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.320626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.320631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.320636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.320646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 
00:33:38.299 [2024-11-20 06:43:58.330615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.330668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.330679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.330684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.330692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.330703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 00:33:38.299 [2024-11-20 06:43:58.340623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.340671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.340681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.340686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.340691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.340702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 00:33:38.299 [2024-11-20 06:43:58.350635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.350683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.350693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.350698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.350703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.350714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 
00:33:38.299 [2024-11-20 06:43:58.360688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.360748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.360758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.360763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.360768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.360779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 00:33:38.299 [2024-11-20 06:43:58.370712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.370765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.370775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.370781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.370786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.370796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 00:33:38.299 [2024-11-20 06:43:58.380732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.380777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.380788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.380793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.380797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.380808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 
00:33:38.299 [2024-11-20 06:43:58.390752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.390801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.390811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.390817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.390821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.390832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 00:33:38.299 [2024-11-20 06:43:58.400768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.299 [2024-11-20 06:43:58.400822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.299 [2024-11-20 06:43:58.400832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.299 [2024-11-20 06:43:58.400837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.299 [2024-11-20 06:43:58.400842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.299 [2024-11-20 06:43:58.400852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.299 qpair failed and we were unable to recover it. 00:33:38.299 [2024-11-20 06:43:58.410819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.410870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.410884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.410890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.410894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.410906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 
00:33:38.300 [2024-11-20 06:43:58.420831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.420887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.420910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.420917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.420923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.420937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 00:33:38.300 [2024-11-20 06:43:58.430855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.430908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.430920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.430926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.430931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.430943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 00:33:38.300 [2024-11-20 06:43:58.440894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.440952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.440963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.440968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.440973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.440984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 
00:33:38.300 [2024-11-20 06:43:58.450940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.451012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.451025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.451031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.451036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.451048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 00:33:38.300 [2024-11-20 06:43:58.460949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.460995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.461005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.461011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.461019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.461030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 00:33:38.300 [2024-11-20 06:43:58.470946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.470999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.471009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.471015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.471019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.471030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 
00:33:38.300 [2024-11-20 06:43:58.480891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.480942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.480953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.480958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.480963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.480974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 00:33:38.300 [2024-11-20 06:43:58.491008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.491060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.491070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.491075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.491080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.491091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 00:33:38.300 [2024-11-20 06:43:58.501062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.501152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.501168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.501173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.501179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.501190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.300 qpair failed and we were unable to recover it. 
00:33:38.300 [2024-11-20 06:43:58.511098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.300 [2024-11-20 06:43:58.511144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.300 [2024-11-20 06:43:58.511154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.300 [2024-11-20 06:43:58.511163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.300 [2024-11-20 06:43:58.511168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.300 [2024-11-20 06:43:58.511178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.301 qpair failed and we were unable to recover it. 00:33:38.301 [2024-11-20 06:43:58.521118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.301 [2024-11-20 06:43:58.521200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.301 [2024-11-20 06:43:58.521211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.301 [2024-11-20 06:43:58.521217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.301 [2024-11-20 06:43:58.521221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.301 [2024-11-20 06:43:58.521232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.301 qpair failed and we were unable to recover it. 00:33:38.301 [2024-11-20 06:43:58.531088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.301 [2024-11-20 06:43:58.531139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.301 [2024-11-20 06:43:58.531150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.301 [2024-11-20 06:43:58.531155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.301 [2024-11-20 06:43:58.531163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.301 [2024-11-20 06:43:58.531175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.301 qpair failed and we were unable to recover it. 
00:33:38.301 [2024-11-20 06:43:58.541183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.301 [2024-11-20 06:43:58.541231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.301 [2024-11-20 06:43:58.541241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.301 [2024-11-20 06:43:58.541246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.301 [2024-11-20 06:43:58.541251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.301 [2024-11-20 06:43:58.541262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.301 qpair failed and we were unable to recover it. 00:33:38.301 [2024-11-20 06:43:58.551203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.301 [2024-11-20 06:43:58.551256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.301 [2024-11-20 06:43:58.551266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.301 [2024-11-20 06:43:58.551271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.301 [2024-11-20 06:43:58.551276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.301 [2024-11-20 06:43:58.551287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.301 qpair failed and we were unable to recover it. 00:33:38.301 [2024-11-20 06:43:58.561236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.301 [2024-11-20 06:43:58.561282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.301 [2024-11-20 06:43:58.561292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.301 [2024-11-20 06:43:58.561297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.301 [2024-11-20 06:43:58.561302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.301 [2024-11-20 06:43:58.561313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.301 qpair failed and we were unable to recover it. 
00:33:38.301 [2024-11-20 06:43:58.571272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.301 [2024-11-20 06:43:58.571323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.301 [2024-11-20 06:43:58.571333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.301 [2024-11-20 06:43:58.571338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.301 [2024-11-20 06:43:58.571342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.301 [2024-11-20 06:43:58.571354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.301 qpair failed and we were unable to recover it. 00:33:38.564 [2024-11-20 06:43:58.581165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.564 [2024-11-20 06:43:58.581215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.564 [2024-11-20 06:43:58.581225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.564 [2024-11-20 06:43:58.581231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.564 [2024-11-20 06:43:58.581236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.564 [2024-11-20 06:43:58.581246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.564 qpair failed and we were unable to recover it. 00:33:38.564 [2024-11-20 06:43:58.591336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.564 [2024-11-20 06:43:58.591420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.564 [2024-11-20 06:43:58.591432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.564 [2024-11-20 06:43:58.591440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.564 [2024-11-20 06:43:58.591445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.564 [2024-11-20 06:43:58.591456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.564 qpair failed and we were unable to recover it. 
00:33:38.564 [2024-11-20 06:43:58.601376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.564 [2024-11-20 06:43:58.601431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.564 [2024-11-20 06:43:58.601441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.564 [2024-11-20 06:43:58.601447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.564 [2024-11-20 06:43:58.601452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.564 [2024-11-20 06:43:58.601462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.564 qpair failed and we were unable to recover it. 00:33:38.564 [2024-11-20 06:43:58.611391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.564 [2024-11-20 06:43:58.611436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.564 [2024-11-20 06:43:58.611447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.564 [2024-11-20 06:43:58.611452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.564 [2024-11-20 06:43:58.611457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.564 [2024-11-20 06:43:58.611467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.564 qpair failed and we were unable to recover it. 00:33:38.564 [2024-11-20 06:43:58.621371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.564 [2024-11-20 06:43:58.621437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.564 [2024-11-20 06:43:58.621448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.564 [2024-11-20 06:43:58.621453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.564 [2024-11-20 06:43:58.621458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.564 [2024-11-20 06:43:58.621469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.564 qpair failed and we were unable to recover it. 
00:33:38.564 [2024-11-20 06:43:58.631434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.564 [2024-11-20 06:43:58.631482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.564 [2024-11-20 06:43:58.631492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.631497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.631502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.631516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.641456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.641555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.641566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.641571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.641576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.641587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.651500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.651586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.651596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.651602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.651607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.651618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 
00:33:38.565 [2024-11-20 06:43:58.661511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.661566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.661576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.661581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.661586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.661597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.671529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.671576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.671586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.671592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.671596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.671607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.681585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.681638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.681648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.681653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.681658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.681668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 
00:33:38.565 [2024-11-20 06:43:58.691479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.691541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.691552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.691557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.691562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.691572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.701595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.701640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.701650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.701656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.701660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.701670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.711634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.711683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.711694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.711699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.711704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.711714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 
00:33:38.565 [2024-11-20 06:43:58.721682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.721743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.721775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.721781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.721786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.721804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.731711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.731762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.731773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.731778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.731783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.731794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.741770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.741823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.741833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.741838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.741843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.741854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 
00:33:38.565 [2024-11-20 06:43:58.751762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.751808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.751818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.565 [2024-11-20 06:43:58.751824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.565 [2024-11-20 06:43:58.751828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.565 [2024-11-20 06:43:58.751839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.565 qpair failed and we were unable to recover it. 00:33:38.565 [2024-11-20 06:43:58.761789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.565 [2024-11-20 06:43:58.761840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.565 [2024-11-20 06:43:58.761850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.761855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.761860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.761875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 00:33:38.566 [2024-11-20 06:43:58.771693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.566 [2024-11-20 06:43:58.771748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.566 [2024-11-20 06:43:58.771760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.771765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.771771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.771781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 
00:33:38.566 [2024-11-20 06:43:58.781839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.566 [2024-11-20 06:43:58.781927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.566 [2024-11-20 06:43:58.781938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.781943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.781948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.781959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 00:33:38.566 [2024-11-20 06:43:58.791837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.566 [2024-11-20 06:43:58.791886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.566 [2024-11-20 06:43:58.791897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.791902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.791906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.791917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 00:33:38.566 [2024-11-20 06:43:58.801873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.566 [2024-11-20 06:43:58.801927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.566 [2024-11-20 06:43:58.801938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.801943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.801948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.801959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 
00:33:38.566 [2024-11-20 06:43:58.811923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.566 [2024-11-20 06:43:58.811981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.566 [2024-11-20 06:43:58.811991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.811997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.812001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.812012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 00:33:38.566 [2024-11-20 06:43:58.821903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.566 [2024-11-20 06:43:58.821951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.566 [2024-11-20 06:43:58.821961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.821966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.821971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.821982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 00:33:38.566 [2024-11-20 06:43:58.831956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:38.566 [2024-11-20 06:43:58.832010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:38.566 [2024-11-20 06:43:58.832021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:38.566 [2024-11-20 06:43:58.832029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:38.566 [2024-11-20 06:43:58.832034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:38.566 [2024-11-20 06:43:58.832046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:38.566 qpair failed and we were unable to recover it. 
00:33:38.828 [2024-11-20 06:43:58.842013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.828 [2024-11-20 06:43:58.842069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.828 [2024-11-20 06:43:58.842079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.828 [2024-11-20 06:43:58.842085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.828 [2024-11-20 06:43:58.842089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.828 [2024-11-20 06:43:58.842100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.828 qpair failed and we were unable to recover it.
00:33:38.828 [2024-11-20 06:43:58.852024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.828 [2024-11-20 06:43:58.852075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.828 [2024-11-20 06:43:58.852088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.828 [2024-11-20 06:43:58.852093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.828 [2024-11-20 06:43:58.852098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.828 [2024-11-20 06:43:58.852109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.828 qpair failed and we were unable to recover it.
00:33:38.828 [2024-11-20 06:43:58.862030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.828 [2024-11-20 06:43:58.862074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.828 [2024-11-20 06:43:58.862085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.828 [2024-11-20 06:43:58.862090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.828 [2024-11-20 06:43:58.862095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.828 [2024-11-20 06:43:58.862105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.828 qpair failed and we were unable to recover it.
00:33:38.828 [2024-11-20 06:43:58.872039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.828 [2024-11-20 06:43:58.872089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.828 [2024-11-20 06:43:58.872099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.828 [2024-11-20 06:43:58.872105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.828 [2024-11-20 06:43:58.872109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.828 [2024-11-20 06:43:58.872120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.828 qpair failed and we were unable to recover it.
00:33:38.828 [2024-11-20 06:43:58.882103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.828 [2024-11-20 06:43:58.882160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.828 [2024-11-20 06:43:58.882171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.828 [2024-11-20 06:43:58.882176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.828 [2024-11-20 06:43:58.882181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.828 [2024-11-20 06:43:58.882191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.828 qpair failed and we were unable to recover it.
00:33:38.828 [2024-11-20 06:43:58.892169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.828 [2024-11-20 06:43:58.892247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.892258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.892263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.892271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.892282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.902144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.902194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.902204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.902209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.902214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.902224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.912176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.912227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.912238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.912243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.912248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.912258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.922204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.922258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.922268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.922273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.922278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.922288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.932266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.932349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.932360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.932366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.932371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.932381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.942298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.942387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.942398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.942404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.942409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.942420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.952266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.952313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.952323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.952328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.952333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.952344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.962339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.962390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.962401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.962406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.962411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.962421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.972441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.972492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.972502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.972507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.972512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.972522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.982379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.982468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.982483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.982489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.982494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.982505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:58.992418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:58.992465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:58.992476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:58.992482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:58.992487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:58.992497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:59.002463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:59.002512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:59.002522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:59.002527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:59.002532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:59.002543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.829 qpair failed and we were unable to recover it.
00:33:38.829 [2024-11-20 06:43:59.012479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.829 [2024-11-20 06:43:59.012538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.829 [2024-11-20 06:43:59.012548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.829 [2024-11-20 06:43:59.012553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.829 [2024-11-20 06:43:59.012558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.829 [2024-11-20 06:43:59.012568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.022475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.022522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.022532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.022540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.022545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.022556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.032539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.032590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.032600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.032605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.032610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.032620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.042424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.042470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.042481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.042487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.042493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.042503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.052585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.052636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.052646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.052651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.052656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.052667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.062601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.062656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.062665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.062670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.062675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.062685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.072630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.072675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.072685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.072691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.072695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.072705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.082644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.082699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.082709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.082714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.082719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.082730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.092714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.092764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.092774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.092779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.092783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.092794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:38.830 [2024-11-20 06:43:59.102708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:38.830 [2024-11-20 06:43:59.102752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:38.830 [2024-11-20 06:43:59.102762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:38.830 [2024-11-20 06:43:59.102767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:38.830 [2024-11-20 06:43:59.102772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:38.830 [2024-11-20 06:43:59.102783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:38.830 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.112755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.112807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.112818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.112824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.112828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.112839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.122755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.122806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.122816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.122822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.122826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.122837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.132828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.132877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.132887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.132892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.132897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.132907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.142831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.142880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.142900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.142906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.142911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.142928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.152818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.152867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.152878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.152887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.152892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.152904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.162909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.162960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.162971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.162976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.162981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.162992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.172932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.172978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.172989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.172994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.172999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.173009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.182952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.183002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.183012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.183017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.183022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.183032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.093 [2024-11-20 06:43:59.193026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.093 [2024-11-20 06:43:59.193075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.093 [2024-11-20 06:43:59.193085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.093 [2024-11-20 06:43:59.193091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.093 [2024-11-20 06:43:59.193096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.093 [2024-11-20 06:43:59.193109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.093 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.203009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.203057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.203067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.203072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.203077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.203088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.213045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.213098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.213108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.213114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.213119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.213129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.223066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.223163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.223173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.223178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.223183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.223194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.233093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.233140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.233150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.233155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.233164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.233174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.243125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.243185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.243195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.243200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.243205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.243216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.253135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.253189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.253199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.253204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.253209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.253219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.263185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.263241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.263250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.263255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.263260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.263271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.273174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.273220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.273230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.273236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.273240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.273251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.283235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.283290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.283303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.283308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.283313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.283324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.293277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.293326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.293336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.293341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.293346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.293357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.303275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.303329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.303338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.303344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.303349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.303359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.313312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.313364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.313374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.313379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.313383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.313394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.323357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.094 [2024-11-20 06:43:59.323407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.094 [2024-11-20 06:43:59.323417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.094 [2024-11-20 06:43:59.323422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.094 [2024-11-20 06:43:59.323426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.094 [2024-11-20 06:43:59.323440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.094 qpair failed and we were unable to recover it.
00:33:39.094 [2024-11-20 06:43:59.333392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.095 [2024-11-20 06:43:59.333438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.095 [2024-11-20 06:43:59.333448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.095 [2024-11-20 06:43:59.333453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.095 [2024-11-20 06:43:59.333458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.095 [2024-11-20 06:43:59.333469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.095 qpair failed and we were unable to recover it.
00:33:39.095 [2024-11-20 06:43:59.343396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.095 [2024-11-20 06:43:59.343443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.095 [2024-11-20 06:43:59.343452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.095 [2024-11-20 06:43:59.343457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.095 [2024-11-20 06:43:59.343462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.095 [2024-11-20 06:43:59.343472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.095 qpair failed and we were unable to recover it.
00:33:39.095 [2024-11-20 06:43:59.353435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.095 [2024-11-20 06:43:59.353485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.095 [2024-11-20 06:43:59.353495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.095 [2024-11-20 06:43:59.353500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.095 [2024-11-20 06:43:59.353504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.095 [2024-11-20 06:43:59.353515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.095 qpair failed and we were unable to recover it.
00:33:39.095 [2024-11-20 06:43:59.363445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.095 [2024-11-20 06:43:59.363493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.095 [2024-11-20 06:43:59.363503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.095 [2024-11-20 06:43:59.363508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.095 [2024-11-20 06:43:59.363513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.095 [2024-11-20 06:43:59.363523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.095 qpair failed and we were unable to recover it.
00:33:39.357 [2024-11-20 06:43:59.373476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.357 [2024-11-20 06:43:59.373528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.357 [2024-11-20 06:43:59.373539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.357 [2024-11-20 06:43:59.373545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.357 [2024-11-20 06:43:59.373550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.357 [2024-11-20 06:43:59.373560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.357 qpair failed and we were unable to recover it.
00:33:39.357 [2024-11-20 06:43:59.383514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.357 [2024-11-20 06:43:59.383558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.357 [2024-11-20 06:43:59.383568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.357 [2024-11-20 06:43:59.383573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.357 [2024-11-20 06:43:59.383578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.357 [2024-11-20 06:43:59.383588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.357 qpair failed and we were unable to recover it.
00:33:39.357 [2024-11-20 06:43:59.393571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.357 [2024-11-20 06:43:59.393618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.357 [2024-11-20 06:43:59.393628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.357 [2024-11-20 06:43:59.393633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.358 [2024-11-20 06:43:59.393638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.358 [2024-11-20 06:43:59.393648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.358 qpair failed and we were unable to recover it.
00:33:39.358 [2024-11-20 06:43:59.403576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:39.358 [2024-11-20 06:43:59.403625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:39.358 [2024-11-20 06:43:59.403635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:39.358 [2024-11-20 06:43:59.403640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.358 [2024-11-20 06:43:59.403645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:39.358 [2024-11-20 06:43:59.403655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:39.358 qpair failed and we were unable to recover it.
00:33:39.358 [2024-11-20 06:43:59.413623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.413675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.413688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.413693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.413698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.413708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.423651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.423707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.423717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.423722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.423726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.423737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.433663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.433706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.433716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.433721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.433726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.433737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 
00:33:39.358 [2024-11-20 06:43:59.443562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.443625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.443636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.443641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.443646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.443657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.453738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.453807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.453818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.453823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.453830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.453841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.463752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.463797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.463807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.463812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.463817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.463827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 
00:33:39.358 [2024-11-20 06:43:59.473783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.473834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.473844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.473849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.473853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.473864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.483806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.483856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.483866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.483871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.483876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.483887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.493833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.493885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.493895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.493900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.493905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.493916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 
00:33:39.358 [2024-11-20 06:43:59.503876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.503929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.503939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.503945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.503950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.503962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.513881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.513945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.513964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.358 [2024-11-20 06:43:59.513970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.358 [2024-11-20 06:43:59.513975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.358 [2024-11-20 06:43:59.513990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.358 qpair failed and we were unable to recover it. 00:33:39.358 [2024-11-20 06:43:59.523802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.358 [2024-11-20 06:43:59.523856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.358 [2024-11-20 06:43:59.523868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.523873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.523878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.523889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 
00:33:39.359 [2024-11-20 06:43:59.533964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.534020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.534031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.534036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.534040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.534052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 00:33:39.359 [2024-11-20 06:43:59.543985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.544075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.544089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.544095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.544100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.544111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 00:33:39.359 [2024-11-20 06:43:59.553987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.554034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.554044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.554049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.554054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.554065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 
00:33:39.359 [2024-11-20 06:43:59.564017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.564066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.564076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.564081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.564086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.564097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 00:33:39.359 [2024-11-20 06:43:59.573933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.574022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.574032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.574038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.574043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.574054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 00:33:39.359 [2024-11-20 06:43:59.584108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.584172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.584183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.584191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.584196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.584207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 
00:33:39.359 [2024-11-20 06:43:59.594099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.594148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.594161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.594166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.594171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.594182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 00:33:39.359 [2024-11-20 06:43:59.604046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.604100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.604111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.604116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.604121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.604131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 00:33:39.359 [2024-11-20 06:43:59.614146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.614200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.614211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.614216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.614221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.614231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 
00:33:39.359 [2024-11-20 06:43:59.624185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.359 [2024-11-20 06:43:59.624229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.359 [2024-11-20 06:43:59.624239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.359 [2024-11-20 06:43:59.624244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.359 [2024-11-20 06:43:59.624249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.359 [2024-11-20 06:43:59.624260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.359 qpair failed and we were unable to recover it. 00:33:39.622 [2024-11-20 06:43:59.634212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.634260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.634270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.634275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.634280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.634290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-11-20 06:43:59.644244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.644296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.644305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.644311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.644315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.644326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 
00:33:39.622 [2024-11-20 06:43:59.654300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.654385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.654396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.654401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.654405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.654416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-11-20 06:43:59.664256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.664313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.664323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.664328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.664333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.664344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-11-20 06:43:59.674305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.674356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.674366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.674372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.674377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.674387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 
00:33:39.622 [2024-11-20 06:43:59.684353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.684404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.684414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.684420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.684424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.684435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-11-20 06:43:59.694410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.694458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.694468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.694473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.694477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.694488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-11-20 06:43:59.704403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.704472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.704482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.704487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.704491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.704502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 
00:33:39.622 [2024-11-20 06:43:59.714435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.714482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.714492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.714501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.714505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.714516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.622 [2024-11-20 06:43:59.724487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.622 [2024-11-20 06:43:59.724578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.622 [2024-11-20 06:43:59.724588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.622 [2024-11-20 06:43:59.724593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.622 [2024-11-20 06:43:59.724597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.622 [2024-11-20 06:43:59.724608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.622 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.734518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.734572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.734581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.734587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.734591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.734602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-11-20 06:43:59.744625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.744678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.744688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.744693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.744698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.744708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.754459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.754505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.754515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.754520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.754524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.754537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.764633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.764688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.764697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.764702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.764706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.764717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-11-20 06:43:59.774665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.774719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.774729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.774734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.774738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.774748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.784657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.784707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.784717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.784724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.784729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.784740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.794677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.794718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.794728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.794733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.794738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.794748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-11-20 06:43:59.804587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.804637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.804647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.804652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.804657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.804667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.814737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.814793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.814803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.814808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.814813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.814823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.824756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.824803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.824813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.824818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.824823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.824833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-11-20 06:43:59.834796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.834860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.834869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.834875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.834879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.834890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.844810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.844857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.844870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.844875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.844879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.844890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 00:33:39.623 [2024-11-20 06:43:59.854836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.854920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.854930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.854935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.623 [2024-11-20 06:43:59.854939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.623 [2024-11-20 06:43:59.854949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.623 qpair failed and we were unable to recover it. 
00:33:39.623 [2024-11-20 06:43:59.864828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.623 [2024-11-20 06:43:59.864889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.623 [2024-11-20 06:43:59.864898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.623 [2024-11-20 06:43:59.864904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.624 [2024-11-20 06:43:59.864909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.624 [2024-11-20 06:43:59.864919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-11-20 06:43:59.874881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.624 [2024-11-20 06:43:59.874926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.624 [2024-11-20 06:43:59.874935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.624 [2024-11-20 06:43:59.874941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.624 [2024-11-20 06:43:59.874945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.624 [2024-11-20 06:43:59.874955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.624 [2024-11-20 06:43:59.884933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.624 [2024-11-20 06:43:59.884991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.624 [2024-11-20 06:43:59.885000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.624 [2024-11-20 06:43:59.885005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.624 [2024-11-20 06:43:59.885013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.624 [2024-11-20 06:43:59.885024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.624 qpair failed and we were unable to recover it. 
00:33:39.624 [2024-11-20 06:43:59.894834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.624 [2024-11-20 06:43:59.894884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.624 [2024-11-20 06:43:59.894894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.624 [2024-11-20 06:43:59.894899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.624 [2024-11-20 06:43:59.894904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.624 [2024-11-20 06:43:59.894914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.624 qpair failed and we were unable to recover it. 00:33:39.886 [2024-11-20 06:43:59.904934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.887 [2024-11-20 06:43:59.904977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.887 [2024-11-20 06:43:59.904986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.887 [2024-11-20 06:43:59.904991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.887 [2024-11-20 06:43:59.904996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.887 [2024-11-20 06:43:59.905007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.887 qpair failed and we were unable to recover it. 00:33:39.887 [2024-11-20 06:43:59.914999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.887 [2024-11-20 06:43:59.915046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.887 [2024-11-20 06:43:59.915056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.887 [2024-11-20 06:43:59.915062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.887 [2024-11-20 06:43:59.915066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.887 [2024-11-20 06:43:59.915077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.887 qpair failed and we were unable to recover it. 
00:33:39.887 [2024-11-20 06:43:59.925037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.887 [2024-11-20 06:43:59.925115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.887 [2024-11-20 06:43:59.925124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.887 [2024-11-20 06:43:59.925130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.887 [2024-11-20 06:43:59.925135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.887 [2024-11-20 06:43:59.925145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.887 qpair failed and we were unable to recover it. 00:33:39.887 [2024-11-20 06:43:59.935079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.887 [2024-11-20 06:43:59.935130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.887 [2024-11-20 06:43:59.935140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.887 [2024-11-20 06:43:59.935145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.887 [2024-11-20 06:43:59.935150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.887 [2024-11-20 06:43:59.935162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.887 qpair failed and we were unable to recover it. 00:33:39.887 [2024-11-20 06:43:59.945043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:39.887 [2024-11-20 06:43:59.945093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:39.887 [2024-11-20 06:43:59.945103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:39.887 [2024-11-20 06:43:59.945109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:39.887 [2024-11-20 06:43:59.945113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:39.887 [2024-11-20 06:43:59.945124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:39.887 qpair failed and we were unable to recover it. 
00:33:39.887 - 00:33:40.419 [2024-11-20 06:43:59.955 - 06:44:00.576] (elided: the same seven-message CONNECT failure cycle repeated for 63 further I/O qpair attempts at ~10 ms intervals, differing only in timestamps; every attempt ended "qpair failed and we were unable to recover it.")
00:33:40.419 [2024-11-20 06:44:00.586819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.419 [2024-11-20 06:44:00.586862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.419 [2024-11-20 06:44:00.586873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.419 [2024-11-20 06:44:00.586878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.419 [2024-11-20 06:44:00.586882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.419 [2024-11-20 06:44:00.586893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.419 qpair failed and we were unable to recover it. 00:33:40.419 [2024-11-20 06:44:00.596916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.419 [2024-11-20 06:44:00.596967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.419 [2024-11-20 06:44:00.596977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.419 [2024-11-20 06:44:00.596982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.419 [2024-11-20 06:44:00.596987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.419 [2024-11-20 06:44:00.596997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.419 qpair failed and we were unable to recover it. 00:33:40.419 [2024-11-20 06:44:00.606943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.419 [2024-11-20 06:44:00.606991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.419 [2024-11-20 06:44:00.607002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.419 [2024-11-20 06:44:00.607007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.419 [2024-11-20 06:44:00.607011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.419 [2024-11-20 06:44:00.607022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.419 qpair failed and we were unable to recover it. 
00:33:40.419 [2024-11-20 06:44:00.616980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.419 [2024-11-20 06:44:00.617030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.419 [2024-11-20 06:44:00.617040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.419 [2024-11-20 06:44:00.617046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.419 [2024-11-20 06:44:00.617051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.419 [2024-11-20 06:44:00.617061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.419 qpair failed and we were unable to recover it. 00:33:40.419 [2024-11-20 06:44:00.626944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.419 [2024-11-20 06:44:00.626987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.419 [2024-11-20 06:44:00.626997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.419 [2024-11-20 06:44:00.627003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.419 [2024-11-20 06:44:00.627007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.419 [2024-11-20 06:44:00.627018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.419 qpair failed and we were unable to recover it. 00:33:40.419 [2024-11-20 06:44:00.637014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.419 [2024-11-20 06:44:00.637063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.419 [2024-11-20 06:44:00.637073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.419 [2024-11-20 06:44:00.637079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.419 [2024-11-20 06:44:00.637083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.419 [2024-11-20 06:44:00.637094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.419 qpair failed and we were unable to recover it. 
00:33:40.419 [2024-11-20 06:44:00.647046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.419 [2024-11-20 06:44:00.647140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.419 [2024-11-20 06:44:00.647151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.419 [2024-11-20 06:44:00.647157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.419 [2024-11-20 06:44:00.647165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.419 [2024-11-20 06:44:00.647176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.419 qpair failed and we were unable to recover it. 00:33:40.420 [2024-11-20 06:44:00.657064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.420 [2024-11-20 06:44:00.657115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.420 [2024-11-20 06:44:00.657125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.420 [2024-11-20 06:44:00.657131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.420 [2024-11-20 06:44:00.657136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.420 [2024-11-20 06:44:00.657146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.420 qpair failed and we were unable to recover it. 00:33:40.420 [2024-11-20 06:44:00.667011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.420 [2024-11-20 06:44:00.667055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.420 [2024-11-20 06:44:00.667069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.420 [2024-11-20 06:44:00.667074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.420 [2024-11-20 06:44:00.667079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.420 [2024-11-20 06:44:00.667090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.420 qpair failed and we were unable to recover it. 
00:33:40.420 [2024-11-20 06:44:00.677086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.420 [2024-11-20 06:44:00.677135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.420 [2024-11-20 06:44:00.677146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.420 [2024-11-20 06:44:00.677151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.420 [2024-11-20 06:44:00.677156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.420 [2024-11-20 06:44:00.677171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.420 qpair failed and we were unable to recover it. 00:33:40.420 [2024-11-20 06:44:00.687197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.420 [2024-11-20 06:44:00.687247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.420 [2024-11-20 06:44:00.687258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.420 [2024-11-20 06:44:00.687263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.420 [2024-11-20 06:44:00.687268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.420 [2024-11-20 06:44:00.687279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.420 qpair failed and we were unable to recover it. 00:33:40.684 [2024-11-20 06:44:00.697063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.697130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.697140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.697145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.697150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.697165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 
00:33:40.684 [2024-11-20 06:44:00.707155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.707206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.707216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.707225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.707230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.707240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 00:33:40.684 [2024-11-20 06:44:00.717242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.717312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.717323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.717330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.717335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.717345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 00:33:40.684 [2024-11-20 06:44:00.727272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.727323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.727333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.727339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.727343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.727354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 
00:33:40.684 [2024-11-20 06:44:00.737295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.737344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.737354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.737359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.737364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.737374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 00:33:40.684 [2024-11-20 06:44:00.747288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.747335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.747346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.747351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.747355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.747366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 00:33:40.684 [2024-11-20 06:44:00.757367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.757414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.757424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.757429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.757434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.757445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 
00:33:40.684 [2024-11-20 06:44:00.767402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.767484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.767496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.767501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.767506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.767517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 00:33:40.684 [2024-11-20 06:44:00.777403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.777449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.777460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.777465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.684 [2024-11-20 06:44:00.777470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.684 [2024-11-20 06:44:00.777480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.684 qpair failed and we were unable to recover it. 00:33:40.684 [2024-11-20 06:44:00.787421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.684 [2024-11-20 06:44:00.787464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.684 [2024-11-20 06:44:00.787481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.684 [2024-11-20 06:44:00.787486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.787491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.787506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 
00:33:40.685 [2024-11-20 06:44:00.797466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.797548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.797560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.797565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.797570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.797581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.807477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.807530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.807540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.807545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.807551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.807561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.817533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.817596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.817607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.817612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.817617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.817629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 
00:33:40.685 [2024-11-20 06:44:00.827515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.827558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.827568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.827573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.827578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.827589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.837566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.837611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.837622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.837630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.837635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.837645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.847621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.847671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.847681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.847686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.847691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.847701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 
00:33:40.685 [2024-11-20 06:44:00.857656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.857709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.857720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.857725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.857730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.857740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.867616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.867656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.867667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.867672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.867677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.867687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.877678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.877723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.877733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.877739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.877743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.877756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 
00:33:40.685 [2024-11-20 06:44:00.887708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.887760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.887772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.887777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.887782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.887793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.897732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.897805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.897817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.897822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.897827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.897838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 00:33:40.685 [2024-11-20 06:44:00.907689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.907730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.907741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.907747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.685 [2024-11-20 06:44:00.907751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.685 [2024-11-20 06:44:00.907762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.685 qpair failed and we were unable to recover it. 
00:33:40.685 [2024-11-20 06:44:00.917646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.685 [2024-11-20 06:44:00.917691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.685 [2024-11-20 06:44:00.917703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.685 [2024-11-20 06:44:00.917709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.686 [2024-11-20 06:44:00.917714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.686 [2024-11-20 06:44:00.917724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.686 qpair failed and we were unable to recover it. 00:33:40.686 [2024-11-20 06:44:00.927785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.686 [2024-11-20 06:44:00.927846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.686 [2024-11-20 06:44:00.927857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.686 [2024-11-20 06:44:00.927862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.686 [2024-11-20 06:44:00.927867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.686 [2024-11-20 06:44:00.927878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.686 qpair failed and we were unable to recover it. 00:33:40.686 [2024-11-20 06:44:00.937845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.686 [2024-11-20 06:44:00.937899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.686 [2024-11-20 06:44:00.937911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.686 [2024-11-20 06:44:00.937916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.686 [2024-11-20 06:44:00.937921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.686 [2024-11-20 06:44:00.937932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.686 qpair failed and we were unable to recover it. 
00:33:40.686 [2024-11-20 06:44:00.947688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.686 [2024-11-20 06:44:00.947729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.686 [2024-11-20 06:44:00.947739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.686 [2024-11-20 06:44:00.947744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.686 [2024-11-20 06:44:00.947749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.686 [2024-11-20 06:44:00.947759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.686 qpair failed and we were unable to recover it. 00:33:40.686 [2024-11-20 06:44:00.957912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.686 [2024-11-20 06:44:00.957980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.686 [2024-11-20 06:44:00.957990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.686 [2024-11-20 06:44:00.957997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.686 [2024-11-20 06:44:00.958002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.686 [2024-11-20 06:44:00.958012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.686 qpair failed and we were unable to recover it. 00:33:40.949 [2024-11-20 06:44:00.967930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:00.967981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:00.967995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:00.968001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:00.968005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:00.968016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 
00:33:40.949 [2024-11-20 06:44:00.977964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:00.978012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:00.978022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:00.978027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:00.978031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:00.978042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 00:33:40.949 [2024-11-20 06:44:00.987930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:00.987969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:00.987980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:00.987986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:00.987991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:00.988001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 00:33:40.949 [2024-11-20 06:44:00.997971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:00.998018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:00.998029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:00.998034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:00.998039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:00.998050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 
00:33:40.949 [2024-11-20 06:44:01.008032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:01.008082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:01.008092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:01.008098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:01.008106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:01.008116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 00:33:40.949 [2024-11-20 06:44:01.018028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:01.018079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:01.018090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:01.018095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:01.018100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:01.018110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 00:33:40.949 [2024-11-20 06:44:01.028016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:01.028064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:01.028074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:01.028080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:01.028085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:01.028095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 
00:33:40.949 [2024-11-20 06:44:01.038100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:01.038185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:01.038196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:01.038202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:01.038208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:01.038220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 00:33:40.949 [2024-11-20 06:44:01.048134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:01.048192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:01.048202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:01.048208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:01.048212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:01.048223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 00:33:40.949 [2024-11-20 06:44:01.058130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.949 [2024-11-20 06:44:01.058186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.949 [2024-11-20 06:44:01.058197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.949 [2024-11-20 06:44:01.058202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.949 [2024-11-20 06:44:01.058207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.949 [2024-11-20 06:44:01.058218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.949 qpair failed and we were unable to recover it. 
00:33:40.950 [2024-11-20 06:44:01.068148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.068192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.068202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.068208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.068212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.068223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.078253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.078316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.078326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.078332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.078337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.078348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.088248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.088311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.088321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.088326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.088331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.088342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 
00:33:40.950 [2024-11-20 06:44:01.098222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.098271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.098284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.098289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.098294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.098304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.108249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.108295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.108306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.108311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.108316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.108326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.118227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.118275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.118285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.118291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.118296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.118306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 
00:33:40.950 [2024-11-20 06:44:01.128382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.128432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.128442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.128447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.128452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.128463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.138410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.138495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.138506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.138511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.138520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.138530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.148375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.148417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.148427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.148432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.148437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.148450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 
00:33:40.950 [2024-11-20 06:44:01.158417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.158461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.158471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.158477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.158482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.158492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.168350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.168449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.168460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.168465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.168470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.168481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 00:33:40.950 [2024-11-20 06:44:01.178524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.178583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.178593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.178599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.178603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.178614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.950 qpair failed and we were unable to recover it. 
00:33:40.950 [2024-11-20 06:44:01.188507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.950 [2024-11-20 06:44:01.188549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.950 [2024-11-20 06:44:01.188559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.950 [2024-11-20 06:44:01.188565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.950 [2024-11-20 06:44:01.188570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.950 [2024-11-20 06:44:01.188580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.951 qpair failed and we were unable to recover it. 00:33:40.951 [2024-11-20 06:44:01.198574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.951 [2024-11-20 06:44:01.198621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.951 [2024-11-20 06:44:01.198631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.951 [2024-11-20 06:44:01.198636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.951 [2024-11-20 06:44:01.198641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.951 [2024-11-20 06:44:01.198651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.951 qpair failed and we were unable to recover it. 00:33:40.951 [2024-11-20 06:44:01.208559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.951 [2024-11-20 06:44:01.208606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.951 [2024-11-20 06:44:01.208616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.951 [2024-11-20 06:44:01.208621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.951 [2024-11-20 06:44:01.208626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.951 [2024-11-20 06:44:01.208636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.951 qpair failed and we were unable to recover it. 
00:33:40.951 [2024-11-20 06:44:01.218624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:40.951 [2024-11-20 06:44:01.218672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:40.951 [2024-11-20 06:44:01.218683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:40.951 [2024-11-20 06:44:01.218689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:40.951 [2024-11-20 06:44:01.218694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:40.951 [2024-11-20 06:44:01.218704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:40.951 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.228617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.228659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.228672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.228678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.228682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.228693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.238664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.238711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.238721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.238726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.238731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.238742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [2024-11-20 06:44:01.248672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.248721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.248731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.248736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.248741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.248752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.258746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.258795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.258805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.258810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.258815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.258825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.268721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.268766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.268776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.268784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.268789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.268799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [2024-11-20 06:44:01.278781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.278828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.278837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.278843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.278848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.278858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.288830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.288904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.288914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.288920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.288926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.288937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.298843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.298892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.298903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.298908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.298913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.298923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 
00:33:41.213 [2024-11-20 06:44:01.308828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.308881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.308900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.308907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.308912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.308931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.318889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.318944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.318955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.318961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.318966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.213 [2024-11-20 06:44:01.318977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.213 qpair failed and we were unable to recover it. 00:33:41.213 [2024-11-20 06:44:01.328960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.213 [2024-11-20 06:44:01.329030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.213 [2024-11-20 06:44:01.329049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.213 [2024-11-20 06:44:01.329057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.213 [2024-11-20 06:44:01.329062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.329077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 
00:33:41.214 [2024-11-20 06:44:01.338971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.339022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.339033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.339038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.339043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.339055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.348800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.348842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.348852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.348858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.348863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.348874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.358969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.359019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.359030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.359035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.359040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.359051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 
00:33:41.214 [2024-11-20 06:44:01.369039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.369090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.369100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.369106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.369110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.369121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.379069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.379118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.379129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.379134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.379139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.379149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.389029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.389071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.389081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.389087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.389092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.389103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 
00:33:41.214 [2024-11-20 06:44:01.399102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.399195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.399207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.399216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.399222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.399233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.409134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.409186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.409197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.409208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.409213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.409223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.419173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.419226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.419236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.419241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.419246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.419257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 
00:33:41.214 [2024-11-20 06:44:01.429056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.429100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.429110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.429115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.429120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.429130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.439226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.439298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.439308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.439314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.439319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.439333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 00:33:41.214 [2024-11-20 06:44:01.449231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.449281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.449291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.214 [2024-11-20 06:44:01.449296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.214 [2024-11-20 06:44:01.449301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.214 [2024-11-20 06:44:01.449312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.214 qpair failed and we were unable to recover it. 
00:33:41.214 [2024-11-20 06:44:01.459270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.214 [2024-11-20 06:44:01.459320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.214 [2024-11-20 06:44:01.459330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.215 [2024-11-20 06:44:01.459335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.215 [2024-11-20 06:44:01.459340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.215 [2024-11-20 06:44:01.459351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.215 qpair failed and we were unable to recover it. 00:33:41.215 [2024-11-20 06:44:01.469216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.215 [2024-11-20 06:44:01.469260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.215 [2024-11-20 06:44:01.469270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.215 [2024-11-20 06:44:01.469275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.215 [2024-11-20 06:44:01.469280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.215 [2024-11-20 06:44:01.469290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.215 qpair failed and we were unable to recover it. 00:33:41.215 [2024-11-20 06:44:01.479295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.215 [2024-11-20 06:44:01.479340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.215 [2024-11-20 06:44:01.479350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.215 [2024-11-20 06:44:01.479355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.215 [2024-11-20 06:44:01.479360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.215 [2024-11-20 06:44:01.479371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.215 qpair failed and we were unable to recover it. 
00:33:41.476 [2024-11-20 06:44:01.489355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.476 [2024-11-20 06:44:01.489407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.476 [2024-11-20 06:44:01.489417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.476 [2024-11-20 06:44:01.489423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.476 [2024-11-20 06:44:01.489427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.476 [2024-11-20 06:44:01.489438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.476 qpair failed and we were unable to recover it. 00:33:41.476 [2024-11-20 06:44:01.499397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.476 [2024-11-20 06:44:01.499450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.476 [2024-11-20 06:44:01.499460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.476 [2024-11-20 06:44:01.499465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.476 [2024-11-20 06:44:01.499470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.476 [2024-11-20 06:44:01.499481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.476 qpair failed and we were unable to recover it. 00:33:41.476 [2024-11-20 06:44:01.509379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.476 [2024-11-20 06:44:01.509418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.509429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.509435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.509440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.509450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 
00:33:41.477 [2024-11-20 06:44:01.519308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.519386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.519397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.519402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.519408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.519419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.529457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.529505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.529519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.529524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.529529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.529539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.539512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.539561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.539572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.539577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.539581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.539592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 
00:33:41.477 [2024-11-20 06:44:01.549488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.549531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.549541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.549546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.549551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.549562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.559553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.559599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.559609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.559614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.559619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.559630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.569559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.569608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.569617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.569623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.569630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.569641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 
00:33:41.477 [2024-11-20 06:44:01.579619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.579673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.579683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.579689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.579693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.579704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.589594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.589645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.589655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.589660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.589665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.589675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.599660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.599708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.599718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.599723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.599728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.599738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 
00:33:41.477 [2024-11-20 06:44:01.609663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.609711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.609722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.609727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.609732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.609742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.619735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.619780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.619791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.619796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.619800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.477 [2024-11-20 06:44:01.619811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.477 qpair failed and we were unable to recover it. 00:33:41.477 [2024-11-20 06:44:01.629755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.477 [2024-11-20 06:44:01.629827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.477 [2024-11-20 06:44:01.629837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.477 [2024-11-20 06:44:01.629843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.477 [2024-11-20 06:44:01.629847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.478 [2024-11-20 06:44:01.629859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.478 qpair failed and we were unable to recover it. 
00:33:41.478 [2024-11-20 06:44:01.639781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.478 [2024-11-20 06:44:01.639831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.478 [2024-11-20 06:44:01.639841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.478 [2024-11-20 06:44:01.639846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.478 [2024-11-20 06:44:01.639850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.478 [2024-11-20 06:44:01.639861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.478 qpair failed and we were unable to recover it. 00:33:41.478 [2024-11-20 06:44:01.649810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.478 [2024-11-20 06:44:01.649861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.478 [2024-11-20 06:44:01.649871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.478 [2024-11-20 06:44:01.649876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.478 [2024-11-20 06:44:01.649880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.478 [2024-11-20 06:44:01.649891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.478 qpair failed and we were unable to recover it. 00:33:41.478 [2024-11-20 06:44:01.659835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.478 [2024-11-20 06:44:01.659884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.478 [2024-11-20 06:44:01.659897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.478 [2024-11-20 06:44:01.659902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.478 [2024-11-20 06:44:01.659907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.478 [2024-11-20 06:44:01.659918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.478 qpair failed and we were unable to recover it. 
00:33:41.478 [2024-11-20 06:44:01.669797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.478 [2024-11-20 06:44:01.669841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.478 [2024-11-20 06:44:01.669851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.478 [2024-11-20 06:44:01.669856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.478 [2024-11-20 06:44:01.669861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.478 [2024-11-20 06:44:01.669871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.478 qpair failed and we were unable to recover it. 00:33:41.478 [2024-11-20 06:44:01.679877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.478 [2024-11-20 06:44:01.679924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.478 [2024-11-20 06:44:01.679934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.478 [2024-11-20 06:44:01.679939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.478 [2024-11-20 06:44:01.679944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.478 [2024-11-20 06:44:01.679954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.478 qpair failed and we were unable to recover it. 00:33:41.478 [2024-11-20 06:44:01.689918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:41.478 [2024-11-20 06:44:01.689971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:41.478 [2024-11-20 06:44:01.689991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:41.478 [2024-11-20 06:44:01.689997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:41.478 [2024-11-20 06:44:01.690002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:41.478 [2024-11-20 06:44:01.690017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.478 qpair failed and we were unable to recover it. 
00:33:41.478 [2024-11-20 06:44:01.699953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.478 [2024-11-20 06:44:01.700039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.478 [2024-11-20 06:44:01.700060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.478 [2024-11-20 06:44:01.700067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.478 [2024-11-20 06:44:01.700076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.478 [2024-11-20 06:44:01.700091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.478 qpair failed and we were unable to recover it.
00:33:41.478 [2024-11-20 06:44:01.709910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.478 [2024-11-20 06:44:01.709956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.478 [2024-11-20 06:44:01.709967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.478 [2024-11-20 06:44:01.709973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.478 [2024-11-20 06:44:01.709977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.478 [2024-11-20 06:44:01.709989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.478 qpair failed and we were unable to recover it.
00:33:41.478 [2024-11-20 06:44:01.719976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.478 [2024-11-20 06:44:01.720025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.478 [2024-11-20 06:44:01.720036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.478 [2024-11-20 06:44:01.720041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.478 [2024-11-20 06:44:01.720046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.478 [2024-11-20 06:44:01.720057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.478 qpair failed and we were unable to recover it.
00:33:41.478 [2024-11-20 06:44:01.730055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.478 [2024-11-20 06:44:01.730120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.478 [2024-11-20 06:44:01.730130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.478 [2024-11-20 06:44:01.730136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.478 [2024-11-20 06:44:01.730141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.478 [2024-11-20 06:44:01.730152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.478 qpair failed and we were unable to recover it.
00:33:41.478 [2024-11-20 06:44:01.740046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.478 [2024-11-20 06:44:01.740129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.478 [2024-11-20 06:44:01.740141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.478 [2024-11-20 06:44:01.740146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.478 [2024-11-20 06:44:01.740151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.478 [2024-11-20 06:44:01.740165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.478 qpair failed and we were unable to recover it.
00:33:41.478 [2024-11-20 06:44:01.749997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.478 [2024-11-20 06:44:01.750039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.478 [2024-11-20 06:44:01.750049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.478 [2024-11-20 06:44:01.750055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.478 [2024-11-20 06:44:01.750060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.478 [2024-11-20 06:44:01.750071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.478 qpair failed and we were unable to recover it.
00:33:41.740 [2024-11-20 06:44:01.760070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.740 [2024-11-20 06:44:01.760114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.760125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.760131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.760136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.760147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.770118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.770170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.770180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.770186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.770190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.770202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.780153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.780209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.780219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.780224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.780229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.780239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.790097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.790138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.790151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.790157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.790165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.790176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.800212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.800265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.800275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.800280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.800285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.800296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.810238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.810292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.810301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.810307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.810312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.810322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.820264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.820311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.820321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.820327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.820331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.820342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.830234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.830282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.830292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.830300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.830305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.830315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.840320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.840364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.840374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.840379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.840384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.840395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.850402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.850452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.850462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.850467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.850471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.850482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.860397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.860443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.860452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.860457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.860462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.860473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.870412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.870484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.870493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.870500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.870505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.870518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.880447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.741 [2024-11-20 06:44:01.880497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.741 [2024-11-20 06:44:01.880507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.741 [2024-11-20 06:44:01.880513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.741 [2024-11-20 06:44:01.880517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.741 [2024-11-20 06:44:01.880528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.741 qpair failed and we were unable to recover it.
00:33:41.741 [2024-11-20 06:44:01.890452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.890501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.890511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.890516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.890521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.890532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.900501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.900552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.900562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.900567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.900572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.900583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.910455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.910503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.910513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.910518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.910523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.910533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.920541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.920592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.920602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.920607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.920612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.920622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.930573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.930665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.930675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.930681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.930686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.930696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.940615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.940682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.940692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.940699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.940706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.940718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.950674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.950719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.950730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.950735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.950740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.950751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.960606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.960655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.960665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.960674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.960678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.960689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.970741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.970790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.970800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.970805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.970810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.970820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.980732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.980779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.980789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.980794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.980799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.980809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:01.990710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:01.990751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:01.990761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:01.990766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:01.990771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:01.990781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:02.000783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:02.000830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:02.000840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:02.000845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:02.000849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:02.000868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:41.742 [2024-11-20 06:44:02.010681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:41.742 [2024-11-20 06:44:02.010742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:41.742 [2024-11-20 06:44:02.010752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:41.742 [2024-11-20 06:44:02.010757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:41.742 [2024-11-20 06:44:02.010764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:41.742 [2024-11-20 06:44:02.010774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:41.742 qpair failed and we were unable to recover it.
00:33:42.004 [2024-11-20 06:44:02.020829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.004 [2024-11-20 06:44:02.020879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.004 [2024-11-20 06:44:02.020889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.004 [2024-11-20 06:44:02.020894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.004 [2024-11-20 06:44:02.020898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.004 [2024-11-20 06:44:02.020909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.004 qpair failed and we were unable to recover it.
00:33:42.004 [2024-11-20 06:44:02.030813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.004 [2024-11-20 06:44:02.030861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.004 [2024-11-20 06:44:02.030880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.004 [2024-11-20 06:44:02.030886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.004 [2024-11-20 06:44:02.030892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.004 [2024-11-20 06:44:02.030906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.004 qpair failed and we were unable to recover it.
00:33:42.004 [2024-11-20 06:44:02.040814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.004 [2024-11-20 06:44:02.040859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.004 [2024-11-20 06:44:02.040870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.004 [2024-11-20 06:44:02.040876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.004 [2024-11-20 06:44:02.040882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.004 [2024-11-20 06:44:02.040893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.004 qpair failed and we were unable to recover it.
00:33:42.004 [2024-11-20 06:44:02.050925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.004 [2024-11-20 06:44:02.050976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.004 [2024-11-20 06:44:02.050994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.004 [2024-11-20 06:44:02.051001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.004 [2024-11-20 06:44:02.051006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.004 [2024-11-20 06:44:02.051020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.004 qpair failed and we were unable to recover it.
00:33:42.004 [2024-11-20 06:44:02.060823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.004 [2024-11-20 06:44:02.060879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.004 [2024-11-20 06:44:02.060891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.004 [2024-11-20 06:44:02.060896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.004 [2024-11-20 06:44:02.060901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.004 [2024-11-20 06:44:02.060913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.004 qpair failed and we were unable to recover it.
00:33:42.004 [2024-11-20 06:44:02.070905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.004 [2024-11-20 06:44:02.070961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.004 [2024-11-20 06:44:02.070971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.004 [2024-11-20 06:44:02.070976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.004 [2024-11-20 06:44:02.070981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.070992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.080983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.081054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.081064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.081069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.081074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.081084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.090911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.090962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.090974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.090979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.090984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.090994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.101058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.101126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.101136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.101141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.101145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.101156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.111004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.111041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.111051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.111056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.111061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.111071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.121056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.121134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.121144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.121150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.121154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.121168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.131135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.131192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.131202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.131207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.131215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.131225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.141150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.141206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.141216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.141221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.141226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.141236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.151061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.151105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.151116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.151121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.151125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.151135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.161176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.161224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.161234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.161239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.161244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.161254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.171246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.171296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.171306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.171311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.171316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.171327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.181306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.181365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.181375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.181380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.181385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.181395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.191259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.191341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.191351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.191356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.005 [2024-11-20 06:44:02.191360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.005 [2024-11-20 06:44:02.191371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.005 qpair failed and we were unable to recover it.
00:33:42.005 [2024-11-20 06:44:02.201304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.005 [2024-11-20 06:44:02.201352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.005 [2024-11-20 06:44:02.201362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.005 [2024-11-20 06:44:02.201367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.006 [2024-11-20 06:44:02.201371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.006 [2024-11-20 06:44:02.201382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.006 qpair failed and we were unable to recover it.
00:33:42.006 [2024-11-20 06:44:02.211320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.006 [2024-11-20 06:44:02.211368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.006 [2024-11-20 06:44:02.211378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.006 [2024-11-20 06:44:02.211383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.006 [2024-11-20 06:44:02.211388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.006 [2024-11-20 06:44:02.211398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.006 qpair failed and we were unable to recover it.
00:33:42.006 [2024-11-20 06:44:02.221401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.006 [2024-11-20 06:44:02.221448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.006 [2024-11-20 06:44:02.221460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.006 [2024-11-20 06:44:02.221466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.006 [2024-11-20 06:44:02.221470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.006 [2024-11-20 06:44:02.221481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.006 qpair failed and we were unable to recover it.
00:33:42.006 [2024-11-20 06:44:02.231359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.006 [2024-11-20 06:44:02.231403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.006 [2024-11-20 06:44:02.231412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.006 [2024-11-20 06:44:02.231418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.006 [2024-11-20 06:44:02.231423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.006 [2024-11-20 06:44:02.231433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.006 qpair failed and we were unable to recover it.
00:33:42.006 [2024-11-20 06:44:02.241398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.006 [2024-11-20 06:44:02.241440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.006 [2024-11-20 06:44:02.241449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.006 [2024-11-20 06:44:02.241454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.006 [2024-11-20 06:44:02.241459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.006 [2024-11-20 06:44:02.241470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.006 qpair failed and we were unable to recover it.
00:33:42.006 [2024-11-20 06:44:02.251479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.006 [2024-11-20 06:44:02.251567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.006 [2024-11-20 06:44:02.251577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.006 [2024-11-20 06:44:02.251582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.006 [2024-11-20 06:44:02.251586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.006 [2024-11-20 06:44:02.251596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.006 qpair failed and we were unable to recover it.
00:33:42.006 [2024-11-20 06:44:02.261526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.006 [2024-11-20 06:44:02.261575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.006 [2024-11-20 06:44:02.261584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.006 [2024-11-20 06:44:02.261589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.006 [2024-11-20 06:44:02.261596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90
00:33:42.006 [2024-11-20 06:44:02.261607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:33:42.006 qpair failed and we were unable to recover it.
00:33:42.006 [2024-11-20 06:44:02.271494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.006 [2024-11-20 06:44:02.271547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.006 [2024-11-20 06:44:02.271557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.006 [2024-11-20 06:44:02.271562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.006 [2024-11-20 06:44:02.271567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.006 [2024-11-20 06:44:02.271577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.006 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.281501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.281546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.281556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.281562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.281567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.281577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.291534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.291627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.291637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.291642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.291646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.291657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 
00:33:42.269 [2024-11-20 06:44:02.301608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.301661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.301670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.301676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.301681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.301691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.311590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.311664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.311674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.311680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.311684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.311694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.321608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.321649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.321659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.321664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.321669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.321679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 
00:33:42.269 [2024-11-20 06:44:02.331680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.331730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.331740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.331745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.331750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.331760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.341714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.341765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.341775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.341781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.341785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.341796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.351672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.351713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.351727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.351733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.351739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.351751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 
00:33:42.269 [2024-11-20 06:44:02.361731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.361781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.361791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.361796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.361800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.361810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.371773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.371822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.371832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.371837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.371842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.371852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.269 qpair failed and we were unable to recover it. 00:33:42.269 [2024-11-20 06:44:02.381695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.269 [2024-11-20 06:44:02.381744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.269 [2024-11-20 06:44:02.381753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.269 [2024-11-20 06:44:02.381758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.269 [2024-11-20 06:44:02.381763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.269 [2024-11-20 06:44:02.381773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 
00:33:42.270 [2024-11-20 06:44:02.391757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.391827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.391836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.391844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.391849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.391859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.401834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.401875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.401885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.401890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.401895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.401905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.411909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.411960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.411969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.411975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.411980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.411991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 
00:33:42.270 [2024-11-20 06:44:02.421951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.422022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.422032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.422037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.422042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.422053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.431919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.431964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.431974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.431979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.431984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.431997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.441949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.441994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.442004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.442009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.442014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.442024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 
00:33:42.270 [2024-11-20 06:44:02.452042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.452094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.452104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.452109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.452114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.452124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.462002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.462077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.462087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.462092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.462097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.462107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.471982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.472022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.472031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.472037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.472041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.472051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 
00:33:42.270 [2024-11-20 06:44:02.482053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.482099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.482109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.482114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.482119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.482129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.492120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.492173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.492184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.492189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.492194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.492204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 00:33:42.270 [2024-11-20 06:44:02.502123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.502183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.502192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.502198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.502202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.270 [2024-11-20 06:44:02.502212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.270 qpair failed and we were unable to recover it. 
00:33:42.270 [2024-11-20 06:44:02.512132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.270 [2024-11-20 06:44:02.512173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.270 [2024-11-20 06:44:02.512183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.270 [2024-11-20 06:44:02.512188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.270 [2024-11-20 06:44:02.512192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.271 [2024-11-20 06:44:02.512203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-11-20 06:44:02.522138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.271 [2024-11-20 06:44:02.522186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.271 [2024-11-20 06:44:02.522196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.271 [2024-11-20 06:44:02.522204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.271 [2024-11-20 06:44:02.522208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.271 [2024-11-20 06:44:02.522219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-11-20 06:44:02.532217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.271 [2024-11-20 06:44:02.532267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.271 [2024-11-20 06:44:02.532276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.271 [2024-11-20 06:44:02.532281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.271 [2024-11-20 06:44:02.532286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.271 [2024-11-20 06:44:02.532297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.271 qpair failed and we were unable to recover it. 
00:33:42.271 [2024-11-20 06:44:02.542222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.271 [2024-11-20 06:44:02.542271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.271 [2024-11-20 06:44:02.542280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.271 [2024-11-20 06:44:02.542286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.271 [2024-11-20 06:44:02.542291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.271 [2024-11-20 06:44:02.542302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.533 [2024-11-20 06:44:02.552224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.552269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.552279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.552284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.552289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.552299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 00:33:42.533 [2024-11-20 06:44:02.562243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.562286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.562296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.562302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.562306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.562320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 
00:33:42.533 [2024-11-20 06:44:02.572316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.572366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.572375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.572380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.572385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.572396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 00:33:42.533 [2024-11-20 06:44:02.582377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.582431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.582441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.582447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.582451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.582462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 00:33:42.533 [2024-11-20 06:44:02.592337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.592382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.592392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.592397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.592402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.592412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 
00:33:42.533 [2024-11-20 06:44:02.602242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.602288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.602297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.602303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.602307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.602318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 00:33:42.533 [2024-11-20 06:44:02.612439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.612512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.612522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.612527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.612532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.612542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 00:33:42.533 [2024-11-20 06:44:02.622469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.533 [2024-11-20 06:44:02.622513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.533 [2024-11-20 06:44:02.622523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.533 [2024-11-20 06:44:02.622528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.533 [2024-11-20 06:44:02.622533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.533 [2024-11-20 06:44:02.622543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.533 qpair failed and we were unable to recover it. 
00:33:42.534 [2024-11-20 06:44:02.632450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.632494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.632504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.632509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.632514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.632525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.642470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.642516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.642526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.642531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.642536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.642546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.652522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.652569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.652582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.652587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.652592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.652602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 
00:33:42.534 [2024-11-20 06:44:02.662565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.662614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.662626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.662631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.662636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.662647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.672517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.672559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.672569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.672574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.672579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.672589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.682571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.682615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.682624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.682629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.682634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.682644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 
00:33:42.534 [2024-11-20 06:44:02.692658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.692709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.692719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.692724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.692731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.692742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.702673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.702771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.702781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.702786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.702791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.702801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.712632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.712678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.712688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.712693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.712698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.712708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 
00:33:42.534 [2024-11-20 06:44:02.722682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.722725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.722735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.722740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.722745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.722755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.732768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.732854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.732863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.732868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.732873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.732884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.534 [2024-11-20 06:44:02.742691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.742817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.742828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.742834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.742838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.742849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 
00:33:42.534 [2024-11-20 06:44:02.752784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.534 [2024-11-20 06:44:02.752822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.534 [2024-11-20 06:44:02.752833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.534 [2024-11-20 06:44:02.752838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.534 [2024-11-20 06:44:02.752843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.534 [2024-11-20 06:44:02.752853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.534 qpair failed and we were unable to recover it. 00:33:42.535 [2024-11-20 06:44:02.762813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.535 [2024-11-20 06:44:02.762859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.535 [2024-11-20 06:44:02.762869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.535 [2024-11-20 06:44:02.762874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.535 [2024-11-20 06:44:02.762879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.535 [2024-11-20 06:44:02.762889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.535 qpair failed and we were unable to recover it. 00:33:42.535 [2024-11-20 06:44:02.772872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.535 [2024-11-20 06:44:02.772922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.535 [2024-11-20 06:44:02.772933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.535 [2024-11-20 06:44:02.772938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.535 [2024-11-20 06:44:02.772943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.535 [2024-11-20 06:44:02.772953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.535 qpair failed and we were unable to recover it. 
00:33:42.535 [2024-11-20 06:44:02.782897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.535 [2024-11-20 06:44:02.782954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.535 [2024-11-20 06:44:02.782968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.535 [2024-11-20 06:44:02.782974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.535 [2024-11-20 06:44:02.782979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.535 [2024-11-20 06:44:02.782990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.535 qpair failed and we were unable to recover it. 00:33:42.535 [2024-11-20 06:44:02.792887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.535 [2024-11-20 06:44:02.792937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.535 [2024-11-20 06:44:02.792950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.535 [2024-11-20 06:44:02.792956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.535 [2024-11-20 06:44:02.792961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.535 [2024-11-20 06:44:02.792972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.535 qpair failed and we were unable to recover it. 00:33:42.535 [2024-11-20 06:44:02.802924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.535 [2024-11-20 06:44:02.803025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.535 [2024-11-20 06:44:02.803045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.535 [2024-11-20 06:44:02.803052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.535 [2024-11-20 06:44:02.803057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.535 [2024-11-20 06:44:02.803072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.535 qpair failed and we were unable to recover it. 
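Note: by this point the identical retry block has recurred roughly every 10 ms for about 600 ms of target time (06:44:02.21 through 06:44:02.81). When triaging a run like this, counting the iterations is usually more informative than reading them; a grep over a saved copy of this console output does it, with the file name below only a placeholder for wherever the output was captured:

    grep -c 'qpair failed and we were unable to recover it' console.log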
00:33:42.795 [2024-11-20 06:44:02.812958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.795 [2024-11-20 06:44:02.813007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.795 [2024-11-20 06:44:02.813018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.795 [2024-11-20 06:44:02.813024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.795 [2024-11-20 06:44:02.813029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d48000b90 00:33:42.795 [2024-11-20 06:44:02.813040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:42.795 qpair failed and we were unable to recover it. 00:33:42.795 [2024-11-20 06:44:02.813223] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:33:42.795 A controller has encountered a failure and is being reset. 00:33:42.795 [2024-11-20 06:44:02.813341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2e00 (9): Bad file descriptor 00:33:42.795 Controller properly reset. 00:33:42.795 Initializing NVMe Controllers 00:33:42.795 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:42.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:42.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:42.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:42.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:42.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:42.795 Initialization complete. Launching workers. 
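Note: the block above is the recovery this test was driving toward. A Keep Alive submission finally fails, the host flushes the dead admin queue pair (the "Bad file descriptor" on tqpair 0x11f2e00), and the controller is reset, reattached at 10.0.0.2:4420, and re-associated with one TCP queue pair per lcore 0 through 3. To probe the target by hand after such a reset, stock nvme-cli commands aimed at the same address from the log would serve; this is an illustrative aside that assumes nvme-cli is installed on the initiator, not a step the harness performs:

    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1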
00:33:42.795 Starting thread on core 1 00:33:42.795 Starting thread on core 2 00:33:42.795 Starting thread on core 3 00:33:42.795 Starting thread on core 0 00:33:42.795 06:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:42.795 00:33:42.795 real 0m11.540s 00:33:42.795 user 0m21.747s 00:33:42.795 sys 0m4.082s 00:33:42.795 06:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:42.795 06:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.795 ************************************ 00:33:42.795 END TEST nvmf_target_disconnect_tc2 00:33:42.795 ************************************ 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:42.795 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:42.795 rmmod nvme_tcp 00:33:42.795 rmmod nvme_fabrics 00:33:43.056 rmmod nvme_keyring 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3029060 ']' 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3029060 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3029060 ']' 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3029060 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3029060 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3029060' 00:33:43.056 killing process with pid 3029060 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 3029060 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3029060 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.056 06:44:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.600 06:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.600 00:33:45.600 real 0m21.909s 00:33:45.600 user 0m50.075s 00:33:45.600 sys 0m10.166s 00:33:45.600 06:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:45.600 06:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:45.600 ************************************ 00:33:45.600 END TEST nvmf_target_disconnect 00:33:45.600 ************************************ 00:33:45.600 06:44:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:45.600 00:33:45.600 real 6m34.152s 00:33:45.600 user 11m31.264s 00:33:45.600 sys 2m16.458s 00:33:45.600 06:44:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:45.600 06:44:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.600 ************************************ 00:33:45.600 END TEST nvmf_host 00:33:45.600 ************************************ 00:33:45.600 06:44:05 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:33:45.600 06:44:05 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:33:45.600 06:44:05 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:45.600 06:44:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:45.600 06:44:05 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:45.600 06:44:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.600 ************************************ 00:33:45.600 START TEST nvmf_target_core_interrupt_mode 00:33:45.600 ************************************ 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:45.600 * Looking for test storage... 00:33:45.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:33:45.600 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.601 --rc genhtml_branch_coverage=1 00:33:45.601 --rc genhtml_function_coverage=1 00:33:45.601 --rc genhtml_legend=1 00:33:45.601 --rc geninfo_all_blocks=1 00:33:45.601 --rc geninfo_unexecuted_blocks=1 00:33:45.601 00:33:45.601 ' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.601 --rc genhtml_branch_coverage=1 00:33:45.601 --rc genhtml_function_coverage=1 00:33:45.601 --rc genhtml_legend=1 00:33:45.601 --rc geninfo_all_blocks=1 00:33:45.601 --rc geninfo_unexecuted_blocks=1 00:33:45.601 00:33:45.601 ' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.601 --rc genhtml_branch_coverage=1 00:33:45.601 --rc genhtml_function_coverage=1 00:33:45.601 --rc genhtml_legend=1 00:33:45.601 --rc geninfo_all_blocks=1 00:33:45.601 --rc geninfo_unexecuted_blocks=1 00:33:45.601 00:33:45.601 ' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.601 --rc genhtml_branch_coverage=1 00:33:45.601 --rc genhtml_function_coverage=1 00:33:45.601 --rc genhtml_legend=1 00:33:45.601 --rc geninfo_all_blocks=1 00:33:45.601 --rc geninfo_unexecuted_blocks=1 00:33:45.601 00:33:45.601 ' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:45.601 ************************************ 00:33:45.601 START TEST nvmf_abort 00:33:45.601 ************************************ 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:45.601 * Looking for test storage... 00:33:45.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:33:45.601 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.863 --rc genhtml_branch_coverage=1 00:33:45.863 --rc genhtml_function_coverage=1 00:33:45.863 --rc genhtml_legend=1 00:33:45.863 --rc geninfo_all_blocks=1 00:33:45.863 --rc geninfo_unexecuted_blocks=1 00:33:45.863 00:33:45.863 ' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.863 --rc genhtml_branch_coverage=1 00:33:45.863 --rc genhtml_function_coverage=1 00:33:45.863 --rc genhtml_legend=1 00:33:45.863 --rc geninfo_all_blocks=1 00:33:45.863 --rc geninfo_unexecuted_blocks=1 00:33:45.863 00:33:45.863 ' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.863 --rc genhtml_branch_coverage=1 00:33:45.863 --rc genhtml_function_coverage=1 00:33:45.863 --rc genhtml_legend=1 00:33:45.863 --rc geninfo_all_blocks=1 00:33:45.863 --rc geninfo_unexecuted_blocks=1 00:33:45.863 00:33:45.863 ' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.863 --rc genhtml_branch_coverage=1 00:33:45.863 --rc genhtml_function_coverage=1 00:33:45.863 --rc genhtml_legend=1 00:33:45.863 --rc geninfo_all_blocks=1 00:33:45.863 --rc geninfo_unexecuted_blocks=1 00:33:45.863 00:33:45.863 ' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.863 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.864 06:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.864 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:33:45.864 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:54.008 06:44:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.008 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:54.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:54.009 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:54.009 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:54.009 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.009 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:54.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:33:54.010 00:33:54.010 --- 10.0.0.2 ping statistics --- 00:33:54.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.010 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:33:54.010 00:33:54.010 --- 10.0.0.1 ping statistics --- 00:33:54.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.010 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3034598 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3034598 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3034598 ']' 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:54.010 06:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.010 [2024-11-20 06:44:13.564190] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:54.010 [2024-11-20 06:44:13.565314] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:33:54.010 [2024-11-20 06:44:13.565364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.010 [2024-11-20 06:44:13.665838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:54.010 [2024-11-20 06:44:13.717611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.010 [2024-11-20 06:44:13.717662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.010 [2024-11-20 06:44:13.717671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.010 [2024-11-20 06:44:13.717678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.010 [2024-11-20 06:44:13.717685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.010 [2024-11-20 06:44:13.719489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.010 [2024-11-20 06:44:13.719650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.010 [2024-11-20 06:44:13.719651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.010 [2024-11-20 06:44:13.799477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:54.010 [2024-11-20 06:44:13.800549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:54.010 [2024-11-20 06:44:13.801117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:54.010 [2024-11-20 06:44:13.801214] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 [2024-11-20 06:44:14.424573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 Malloc0 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 Delay0 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 [2024-11-20 06:44:14.524508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.274 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:54.536 [2024-11-20 06:44:14.669918] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:56.499 Initializing NVMe Controllers 00:33:56.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:56.499 controller IO queue size 128 less than required 00:33:56.499 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:56.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:56.499 Initialization complete. Launching workers. 
00:33:56.499 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28288 00:33:56.499 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28349, failed to submit 66 00:33:56.499 success 28288, unsuccessful 61, failed 0 00:33:56.782 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:56.782 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.782 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:56.782 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:56.783 rmmod nvme_tcp 00:33:56.783 rmmod nvme_fabrics 00:33:56.783 rmmod nvme_keyring 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3034598 ']' 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3034598 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3034598 ']' 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3034598 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3034598 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3034598' 00:33:56.783 killing process with pid 3034598 
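(For anyone reconstructing the nvmf_abort flow from the xtrace records above: the RPC sequence and the abort example invocation reduce to roughly the following. This is a condensed sketch assembled from this run's trace, run from the SPDK repo root; the address, bdev sizes and delay values are simply the ones this run used, not requirements.)

#!/usr/bin/env bash
rpc=./scripts/rpc.py
# Transport plus a deliberately slow namespace: a 64 MiB malloc bdev (4 KiB blocks)
# wrapped in a delay bdev adding ~1 s of latency (values are microseconds), so
# plenty of I/O stays in flight and is abortable.
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Drive qd-128 I/O for 1 second while aborting it; the summary above
# (abort submitted 28349, success 28288) is this tool's output.
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0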
00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3034598 00:33:56.783 06:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3034598 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.049 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.962 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.962 00:33:58.962 real 0m13.393s 00:33:58.962 user 0m10.906s 00:33:58.962 sys 0m7.049s 00:33:58.962 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:58.962 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:58.962 ************************************ 00:33:58.962 END TEST nvmf_abort 00:33:58.962 ************************************ 00:33:58.962 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:58.962 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:58.962 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:58.962 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:59.223 ************************************ 00:33:59.223 START TEST nvmf_ns_hotplug_stress 00:33:59.223 ************************************ 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:59.224 * Looking for test storage... 
00:33:59.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.224 --rc genhtml_branch_coverage=1 00:33:59.224 --rc genhtml_function_coverage=1 00:33:59.224 --rc genhtml_legend=1 00:33:59.224 --rc geninfo_all_blocks=1 00:33:59.224 --rc geninfo_unexecuted_blocks=1 00:33:59.224 00:33:59.224 ' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.224 --rc genhtml_branch_coverage=1 00:33:59.224 --rc genhtml_function_coverage=1 00:33:59.224 --rc genhtml_legend=1 00:33:59.224 --rc geninfo_all_blocks=1 00:33:59.224 --rc geninfo_unexecuted_blocks=1 00:33:59.224 00:33:59.224 ' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.224 --rc genhtml_branch_coverage=1 00:33:59.224 --rc genhtml_function_coverage=1 00:33:59.224 --rc genhtml_legend=1 00:33:59.224 --rc geninfo_all_blocks=1 00:33:59.224 --rc geninfo_unexecuted_blocks=1 00:33:59.224 00:33:59.224 ' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.224 --rc genhtml_branch_coverage=1 00:33:59.224 --rc genhtml_function_coverage=1 
00:33:59.224 --rc genhtml_legend=1 00:33:59.224 --rc geninfo_all_blocks=1 00:33:59.224 --rc geninfo_unexecuted_blocks=1 00:33:59.224 00:33:59.224 ' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
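(The lcov probe traced above exercises scripts/common.sh's version helpers: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on ., - and : and compares them component by component. A standalone sketch of the same split-and-compare idea, with simplified names and numeric-only components assumed; this is not the literal helper:)

# return 0 (true) when dotted version $1 sorts before $2
version_lt() {
    local IFS=.-
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"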
00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:59.224 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:59.225 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.369 06:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:07.369 06:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:07.369 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:07.369 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.369 
06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:07.369 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.369 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:07.369 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.370 06:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:34:07.370 00:34:07.370 --- 10.0.0.2 ping statistics --- 00:34:07.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.370 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:34:07.370 00:34:07.370 --- 10.0.0.1 ping statistics --- 00:34:07.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.370 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3039499 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3039499 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3039499 ']' 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
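(The nvmf_tcp_init records above split one dual-port e810 NIC between target and initiator on a single machine: port cvl_0_0 moves into a private network namespace and becomes the target side at 10.0.0.2, while port cvl_0_1 stays in the host namespace as the initiator side at 10.0.0.1. Condensed from the trace, same interface names:)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface, then sanity-ping both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1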
00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:07.370 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:07.370 [2024-11-20 06:44:27.012082] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:07.370 [2024-11-20 06:44:27.013219] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:34:07.370 [2024-11-20 06:44:27.013269] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.370 [2024-11-20 06:44:27.111756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:07.370 [2024-11-20 06:44:27.162853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.370 [2024-11-20 06:44:27.162902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.370 [2024-11-20 06:44:27.162910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.370 [2024-11-20 06:44:27.162918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.370 [2024-11-20 06:44:27.162924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.370 [2024-11-20 06:44:27.164992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:07.370 [2024-11-20 06:44:27.165165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.370 [2024-11-20 06:44:27.165178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:07.370 [2024-11-20 06:44:27.242680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:07.370 [2024-11-20 06:44:27.243622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:07.370 [2024-11-20 06:44:27.244296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:07.370 [2024-11-20 06:44:27.244432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
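(The target is then launched inside that namespace with --interrupt-mode and core mask 0xE, which is why the notices above report three reactors on cores 1-3 and every poll group set to intr mode. waitforlisten's socket poll can be approximated as below; using spdk_get_version as the liveness RPC is my stand-in, not the literal helper:)

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# wait until the app answers on /var/tmp/spdk.sock, bailing out if it died early
until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
    kill -0 $nvmfpid 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done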
00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:34:07.631 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:07.893 [2024-11-20 06:44:28.030156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.893 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:08.153 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.153 [2024-11-20 06:44:28.411024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.153 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:08.447 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:34:08.708 Malloc0 00:34:08.708 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:08.708 Delay0 00:34:08.708 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:08.968 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:34:09.229 NULL1 00:34:09.229 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:34:09.489 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3039869 00:34:09.489 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:09.489 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:34:09.489 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:10.876 Read completed with error (sct=0, sc=11) 00:34:10.876 06:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:10.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:10.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:10.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:10.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:10.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:10.876 06:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:10.876 06:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:10.876 true 00:34:11.138 06:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:11.138 06:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:11.709 06:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:11.970 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:11.970 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:34:12.230 true 00:34:12.230 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:12.230 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:12.230 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:34:12.491 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:34:12.491 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:34:12.752 true 00:34:12.752 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:12.752 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:14.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.137 06:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:14.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.137 06:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:34:14.137 06:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:34:14.398 true 00:34:14.398 06:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:14.398 06:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:15.339 06:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:15.339 06:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:34:15.339 06:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:34:15.339 true 00:34:15.600 06:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:15.600 06:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:15.600 06:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:15.859 06:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:34:15.859 06:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:34:16.120 true 00:34:16.120 06:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:16.120 06:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:17.503 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:34:17.503 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:34:17.503 true 00:34:17.503 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:17.503 06:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:18.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.444 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:18.705 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:34:18.705 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:34:18.705 true 00:34:18.705 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:18.705 06:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:18.966 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:19.226 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:34:19.226 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:34:19.226 true 00:34:19.226 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:19.226 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:19.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.487 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:19.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.748 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:34:19.748 06:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:34:20.009 true 00:34:20.009 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:20.009 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:20.951 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:20.951 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:34:20.951 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:34:20.951 true 00:34:21.212 06:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:21.212 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:21.212 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:21.492 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:34:21.492 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:34:21.755 true 00:34:21.756 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:21.756 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:21.756 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:22.017 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:34:22.017 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:34:22.278 true 00:34:22.278 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:22.278 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:22.278 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:22.540 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:34:22.540 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:34:22.800 true 00:34:22.800 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:22.800 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
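For reference, the cycle traced throughout this run (target/ns_hotplug_stress.sh lines 44-50) reduces to the loop sketched below. This is a reconstruction from the xtrace, not the script's verbatim source: the while/kill -0 loop form and the variable holding the perturbation PID (3039869 in this run) are assumptions, and rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the trace.

    # Sketch reconstructed from the xtrace; names not visible in the trace are hypothetical.
    null_size=1000
    while kill -0 "$perturb_pid" 2> /dev/null; do                      # sh@44: loop while the I/O generator (PID 3039869 here) is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45: hot-remove namespace 1 under load
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46: re-attach the Delay0 bdev as namespace 1
        null_size=$((null_size + 1))                                   # sh@49: grow the size each pass (1003, 1004, ... above)
        rpc.py bdev_null_resize NULL1 "$null_size"                     # sh@50: resize NULL1 concurrently with the churn
    done

bdev_null_resize prints true on success, which matches the bare true records interleaved in the trace, and the suppressed "Read completed with error (sct=0, sc=11)" messages are consistent with reads failing while namespace 1 is detached, which is the point of the stress test.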
00:34:24.186 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:24.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:24.186 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:34:24.186 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:34:24.186 true 00:34:24.448 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:24.448 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.282 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:25.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.282 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:34:25.282 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:34:25.543 true 00:34:25.543 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:25.543 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.804 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:25.804 06:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:34:25.804 06:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:34:26.065 true 00:34:26.065 06:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:26.065 06:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.452 06:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:27.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.452 06:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:34:27.452 06:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:34:27.714 true 00:34:27.714 06:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:27.714 06:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:28.660 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:28.660 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:34:28.660 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:34:28.921 true 00:34:28.921 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:28.921 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:28.921 06:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.181 06:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:34:29.181 06:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:34:29.441 true 00:34:29.441 06:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:29.441 06:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.384 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:30.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.645 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:34:30.645 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:34:30.905 true 00:34:30.905 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:30.905 06:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:31.848 06:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:31.848 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:34:31.848 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:34:32.109 true 00:34:32.109 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:32.109 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:32.109 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:32.370 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:34:32.370 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:34:32.630 true 00:34:32.630 06:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:32.630 06:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:33.571 06:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:33.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:33.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:33.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:33.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:33.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:33.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:33.832 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:34:33.832 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:34:34.093 true 00:34:34.093 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:34.093 06:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.036 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:35.036 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:34:35.036 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:34:35.296 true 00:34:35.297 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:35.297 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.556 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:35.556 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:34:35.556 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1026 00:34:35.817 true 00:34:35.817 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:35.817 06:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:37.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:37.019 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:37.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:37.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:37.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:37.019 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:34:37.019 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:34:37.279 true 00:34:37.279 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:37.279 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:37.540 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:37.540 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:34:37.541 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:34:37.801 true 00:34:37.801 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:37.801 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.063 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.324 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:34:38.324 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:34:38.324 true 00:34:38.324 06:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:38.324 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.585 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.846 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:34:38.846 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:34:38.846 true 00:34:38.846 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:38.846 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.105 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:39.365 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:34:39.365 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:34:39.365 true 00:34:39.626 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:39.626 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.626 Initializing NVMe Controllers 00:34:39.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:39.626 Controller IO queue size 128, less than required. 00:34:39.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:39.626 Controller IO queue size 128, less than required. 00:34:39.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:39.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:39.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:39.626 Initialization complete. Launching workers. 
00:34:39.626 ========================================================
00:34:39.626                                                                           Latency(us)
00:34:39.626 Device Information                                                      :    IOPS     MiB/s     Average        min           max
00:34:39.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2320.11    1.13    33813.60    1397.09    1085802.50
00:34:39.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18001.68    8.79     7110.14    1170.02     342150.76
00:34:39.626 ========================================================
00:34:39.626 Total                                                                   : 20321.78    9.92    10158.83    1170.02    1085802.50
00:34:39.626
00:34:39.626 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:39.887 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:34:39.887 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:34:39.887 true 00:34:39.887 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039869 00:34:39.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3039869) - No such process 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3039869 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:40.147 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:40.407 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:34:40.407 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:34:40.408 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:34:40.408 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:40.408 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:34:40.408 null0 00:34:40.408 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:40.408 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:40.408 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:34:40.668 null1 00:34:40.668 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:40.668
06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:40.668 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:34:40.668 null2 00:34:40.668 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:40.668 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:40.668 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:40.928 null3 00:34:40.928 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:40.928 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:40.928 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:41.188 null4 00:34:41.188 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:41.188 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:41.188 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:41.188 null5 00:34:41.188 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:41.188 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:41.188 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:41.448 null6 00:34:41.448 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:41.448 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:41.448 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:41.710 null7 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
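The eight null bdevs (null0 through null7) created in the trace above come from the setup at lines 58-60. From the (( i = 0 )) / (( i < nthreads )) / (( ++i )) steps in the xtrace, the loop is roughly the sketch below; the loop syntax is inferred, only the variable names and RPC calls are confirmed by the log.

    nthreads=8
    pids=()                                  # sh@58: will hold the background worker PIDs
    for ((i = 0; i < nthreads; i++)); do
        # sh@60: one 100 MB null bdev with a 4096-byte block size per worker;
        # each call echoes the new bdev name (null0, null1, ...) as seen above
        rpc.py bdev_null_create "null$i" 100 4096
    done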
00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
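The interleaved sh@62-@64 and sh@14-@18 records here are eight background copies of add_remove() starting up, one namespace/bdev pair each; the wait on their PIDs appears just below (sh@66). A sketch reconstructed from the trace follows; anything beyond the visible RPC calls and arithmetic is inferred.

    add_remove() {
        local nsid=$1 bdev=$2                # sh@14: expands to 'local nsid=1 bdev=null0' etc. above
        for ((i = 0; i < 10; i++)); do       # sh@16: ten add/remove rounds per worker
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

    for ((i = 0; i < nthreads; i++)); do     # sh@62
        add_remove $((i + 1)) "null$i" &     # sh@63: e.g. 'add_remove 1 null0' in the trace
        pids+=($!)                           # sh@64: collect each worker's PID
    done
    wait "${pids[@]}"                        # sh@66: the 'wait 3046291 3046294 ...' seen below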
00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3046291 3046294 3046297 3046300 3046303 3046306 3046309 3046312 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.710 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:41.711 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.711 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:41.711 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:41.971 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:41.972 06:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:41.972 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.972 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.972 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:42.232 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.232 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.232 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:42.232 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:42.232 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.233 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.492 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.492 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:42.492 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:42.493 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:42.493 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:42.493 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:42.493 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:42.493 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:42.752 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:42.752 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:42.752 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.013 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.274 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:43.275 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.534 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:43.535 06:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.535 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.795 06:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:43.795 06:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.056 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.317 06:45:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:44.317 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:44.578 06:45:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:44.578 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:44.840 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:44.840 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:44.840 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.840 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.840 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:44.840 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:44.840 06:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:44.840 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.840 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.840 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:44.840 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:44.840 
06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:44.840 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:44.840 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:44.840 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:45.101 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:45.101 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.101 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.101 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:45.101 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.102 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.364 06:45:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.364 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:45.625 rmmod nvme_tcp 00:34:45.625 rmmod nvme_fabrics 00:34:45.625 rmmod nvme_keyring 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3039499 ']' 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3039499 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3039499 ']' 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3039499 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:45.625 06:45:05 
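The xtrace above is the hot-plug loop itself: both arithmetic guards sit on target/ns_hotplug_stress.sh line 16, the nvmf_subsystem_add_ns call on line 17, and nvmf_subsystem_remove_ns on line 18, with the interleaved ordering indicating that iterations run concurrently. A minimal sketch of one worker, reconstructed only from the trace — the worker function, the shuf-based fan-out, and the variable names are assumptions; only the two rpc.py calls, the null0..null7 backing bdevs, and the 10-iteration bound come from the log:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as traced
nqn=nqn.2016-06.io.spdk:cnode1                                           # subsystem as traced

hotplug_worker() {                # hypothetical helper; namespace N is backed by bdev null(N-1)
    local n=$1 i
    for ((i = 0; i < 10; ++i)); do                                      # @16 in the trace
        $rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # @17
        $rpc_py nvmf_subsystem_remove_ns "$nqn" "$n"                    # @18
    done
}

for n in $(shuf -i 1-8); do hotplug_worker "$n" & done   # assumed source of the interleaving
wait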
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3039499 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3039499' 00:34:45.625 killing process with pid 3039499 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3039499 00:34:45.625 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3039499 00:34:45.887 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:45.887 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:45.887 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:45.887 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:45.887 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:34:45.887 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:45.887 06:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:34:45.887 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:45.887 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:45.887 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.887 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.887 06:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.922 00:34:47.922 real 0m48.832s 00:34:47.922 user 2m58.143s 00:34:47.922 sys 0m20.481s 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:47.922 ************************************ 00:34:47.922 END TEST nvmf_ns_hotplug_stress 00:34:47.922 ************************************ 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
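nvmftestfini's teardown order is fully visible in the trace around this point: sync, retry-unload the kernel initiator modules, kill the target process (pid 3039499, running as reactor_1), strip the SPDK_NVMF iptables rules, remove the SPDK network namespace, and flush the test interface address. Condensed into a sketch — the retry loop and the pid variable name are simplified relative to what nvmf/common.sh actually does:

nvmfpid=3039499                      # target pid from this run; variable name assumed
sync                                 # nvmf/common.sh@121
for i in {1..20}; do                 # nvmf/common.sh@125: module unload can need retries
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess/wait, autotest_common.sh@971/@976
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr, nvmf/common.sh@791
ip -4 addr flush cvl_0_1             # release the test NIC address, nvmf/common.sh@303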
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:47.922 ************************************ 00:34:47.922 START TEST nvmf_delete_subsystem 00:34:47.922 ************************************ 00:34:47.922 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:48.184 * Looking for test storage... 00:34:48.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:48.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.184 --rc genhtml_branch_coverage=1 00:34:48.184 --rc genhtml_function_coverage=1 00:34:48.184 --rc genhtml_legend=1 00:34:48.184 --rc geninfo_all_blocks=1 00:34:48.184 --rc geninfo_unexecuted_blocks=1 00:34:48.184 00:34:48.184 ' 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:48.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.184 --rc genhtml_branch_coverage=1 00:34:48.184 --rc genhtml_function_coverage=1 00:34:48.184 --rc genhtml_legend=1 00:34:48.184 --rc geninfo_all_blocks=1 00:34:48.184 --rc geninfo_unexecuted_blocks=1 00:34:48.184 00:34:48.184 ' 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:48.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.184 --rc genhtml_branch_coverage=1 00:34:48.184 --rc genhtml_function_coverage=1 00:34:48.184 --rc genhtml_legend=1 00:34:48.184 --rc geninfo_all_blocks=1 00:34:48.184 --rc geninfo_unexecuted_blocks=1 00:34:48.184 00:34:48.184 ' 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:48.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.184 --rc genhtml_branch_coverage=1 00:34:48.184 --rc genhtml_function_coverage=1 00:34:48.184 --rc 
genhtml_legend=1 00:34:48.184 --rc geninfo_all_blocks=1 00:34:48.184 --rc geninfo_unexecuted_blocks=1 00:34:48.184 00:34:48.184 ' 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.184 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.185 06:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:34:48.185 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.330 06:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.330 06:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:56.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:56.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.330 06:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:56.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.330 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:56.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:56.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:34:56.331 00:34:56.331 --- 10.0.0.2 ping statistics --- 00:34:56.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.331 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:56.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:34:56.331 00:34:56.331 --- 10.0.0.1 ping statistics --- 00:34:56.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.331 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3051835 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3051835 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3051835 ']' 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
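For readers following the trace: the nvmf_tcp_init block above moves one port of the E810 pair into a private network namespace so that target and initiator traffic crosses a real link. A minimal sketch of the same steps, with interface names and addresses copied from this run — a reconstruction of what the trace shows, not the literal common.sh code:

    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"           # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target side
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> root namespace

The two successful pings above are the gate: only after both directions answer does the suite go on to start the target.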
00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:56.331 06:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.331 [2024-11-20 06:45:15.996549] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.331 [2024-11-20 06:45:15.997690] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:34:56.331 [2024-11-20 06:45:15.997745] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.331 [2024-11-20 06:45:16.097248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:56.331 [2024-11-20 06:45:16.148703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.331 [2024-11-20 06:45:16.148756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.331 [2024-11-20 06:45:16.148771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.331 [2024-11-20 06:45:16.148778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.331 [2024-11-20 06:45:16.148785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.331 [2024-11-20 06:45:16.150354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.331 [2024-11-20 06:45:16.150389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.331 [2024-11-20 06:45:16.228425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.331 [2024-11-20 06:45:16.229250] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:56.331 [2024-11-20 06:45:16.229449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
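The nvmfappstart block just traced is the interrupt-mode variant of the usual target launch: nvmf_tgt runs inside the target namespace with --interrupt-mode and a two-core mask, and the NOTICE lines confirm that both reactors and all spdk_threads come up in intr mode rather than polling. Condensed, the launch amounts to the following sketch ($rootdir and waitforlisten are the suite's own variable and helper, shown here as the trace uses them):

    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs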
00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.592 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.592 [2024-11-20 06:45:16.863407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.853 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.854 [2024-11-20 06:45:16.895989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.854 NULL1 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.854 06:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.854 Delay0 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3052047 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:56.854 06:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:56.854 [2024-11-20 06:45:17.018354] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
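Stripped of xtrace noise, the first test case has a simple shape: back the subsystem with a null bdev wrapped in a delay bdev that adds roughly one second to every operation, drive it with spdk_nvme_perf at queue depth 128, then delete the subsystem while I/O is still queued. A condensed sketch assembled from the commands traced above (rpc_cmd is the suite's JSON-RPC wrapper; every argument is copied from this run):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512B blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1s of added latency per op
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank it mid-I/O

The flood of "completed with error (sct=0, sc=8)" lines that follows is the point of the test, not a failure of it: status type 0 / status code 0x08 decodes to the NVMe generic status "Command Aborted due to SQ Deletion", i.e. the in-flight perf I/O being failed back as the queues are torn down.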
00:34:58.771 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.771 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.771 06:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 [2024-11-20 06:45:19.119380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54c680 is same with the state(6) to be set 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error 
(sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 starting I/O failed: -6 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 [2024-11-20 06:45:19.119925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f283400d490 is same with the state(6) to be set 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.032 Write completed with error (sct=0, sc=8) 00:34:59.032 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed 
with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 
00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Write completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 Read completed with error (sct=0, sc=8) 00:34:59.033 [2024-11-20 06:45:19.120355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54c2c0 is same with the state(6) to be set 00:34:59.977 [2024-11-20 06:45:20.076704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54d9a0 is same with the state(6) to be set 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 [2024-11-20 06:45:20.119518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f283400d020 is same with the state(6) to be set 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 [2024-11-20 06:45:20.119700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54c860 is same with the state(6) to be set 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 
00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 [2024-11-20 06:45:20.119828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54c4a0 is same with the state(6) to be set 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Read completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 Write completed with error (sct=0, sc=8) 00:34:59.977 [2024-11-20 06:45:20.119891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f283400d7c0 is same with the state(6) to be set 00:34:59.977 Initializing NVMe Controllers 00:34:59.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:59.977 Controller IO queue size 128, less than required. 00:34:59.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:59.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:59.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:59.977 Initialization complete. Launching workers. 
00:34:59.977 ======================================================== 00:34:59.977 Latency(us) 00:34:59.977 Device Information : IOPS MiB/s Average min max 00:34:59.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.73 0.08 911352.33 994.15 1013280.35 00:34:59.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.34 0.07 951949.32 423.36 1046909.89 00:34:59.977 ======================================================== 00:34:59.978 Total : 311.08 0.15 930711.98 423.36 1046909.89 00:34:59.978 00:34:59.978 [2024-11-20 06:45:20.120564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54d9a0 (9): Bad file descriptor 00:34:59.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:34:59.978 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.978 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:34:59.978 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3052047 00:34:59.978 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3052047 00:35:00.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3052047) - No such process 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3052047 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3052047 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3052047 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:00.550 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:00.551 [2024-11-20 06:45:20.655827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3052720 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720 00:35:00.551 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:00.551 [2024-11-20 06:45:20.755499] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
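The second case re-creates the subsystem, runs a 3-second perf job against it, and then watches the process rather than the I/O: the loop traced here polls with kill -0 until perf goes away, bounded so a hang fails the test. In paraphrase of the delete_subsystem.sh lines visible in this trace (a sketch, not the verbatim script):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 sends no signal; it only tests the PID
        (( delay++ > 20 )) && exit 1            # give up after ~10s of 0.5s sleeps
        sleep 0.5
    done
    wait "$perf_pid"   # reap it; the "No such process" from kill marks the exit

The iterations that follow are exactly this loop ticking every half second until the perf run completes.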
00:35:01.123 06:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:01.123 06:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720
00:35:01.123 06:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:35:01.693 06:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:01.693 06:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720
00:35:01.693 06:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:35:01.954 06:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:01.954 06:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720
00:35:01.954 06:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:35:02.526 06:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:02.526 06:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720
00:35:02.526 06:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:35:03.097 06:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:03.097 06:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720
00:35:03.097 06:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:35:03.672 06:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:03.672 06:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720
00:35:03.672 06:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:35:03.672 Initializing NVMe Controllers
00:35:03.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:03.672 Controller IO queue size 128, less than required.
00:35:03.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:03.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:35:03.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:35:03.672 Initialization complete. Launching workers.
00:35:03.672 ========================================================
00:35:03.672                                Latency(us)
00:35:03.672 Device Information                                                     :       IOPS      MiB/s     Average         min         max
00:35:03.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06  1002473.99  1000151.78  1042311.50
00:35:03.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06  1005243.09  1000415.00  1043296.08
00:35:03.672 ========================================================
00:35:03.672 Total                                                                  :     256.00       0.12  1003858.54  1000151.78  1043296.08
00:35:03.672
00:35:03.933 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:03.933 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3052720
00:35:03.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3052720) - No such process
00:35:03.933 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3052720
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:04.195 rmmod nvme_tcp
00:35:04.195 rmmod nvme_fabrics
00:35:04.195 rmmod nvme_keyring
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3051835 ']'
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3051835
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3051835 ']'
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3051835
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3051835 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3051835' 00:35:04.195 killing process with pid 3051835 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3051835 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3051835 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:04.195 06:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.742 00:35:06.742 real 0m18.362s 00:35:06.742 user 0m26.513s 00:35:06.742 sys 0m7.455s 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:06.742 ************************************ 00:35:06.742 END TEST nvmf_delete_subsystem 00:35:06.742 ************************************ 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:06.742 ************************************ 00:35:06.742 START TEST nvmf_host_management 00:35:06.742 ************************************ 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:06.742 * Looking for test storage... 00:35:06.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:06.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.742 --rc genhtml_branch_coverage=1 00:35:06.742 --rc genhtml_function_coverage=1 00:35:06.742 --rc genhtml_legend=1 00:35:06.742 --rc geninfo_all_blocks=1 00:35:06.742 --rc geninfo_unexecuted_blocks=1 00:35:06.742 00:35:06.742 ' 00:35:06.742 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:06.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.742 --rc genhtml_branch_coverage=1 00:35:06.742 --rc genhtml_function_coverage=1 00:35:06.743 --rc genhtml_legend=1 00:35:06.743 --rc geninfo_all_blocks=1 00:35:06.743 --rc geninfo_unexecuted_blocks=1 00:35:06.743 00:35:06.743 ' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:06.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.743 --rc genhtml_branch_coverage=1 00:35:06.743 --rc genhtml_function_coverage=1 00:35:06.743 --rc genhtml_legend=1 00:35:06.743 --rc geninfo_all_blocks=1 00:35:06.743 --rc geninfo_unexecuted_blocks=1 00:35:06.743 00:35:06.743 ' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:06.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.743 --rc genhtml_branch_coverage=1 00:35:06.743 --rc genhtml_function_coverage=1 00:35:06.743 --rc genhtml_legend=1 
00:35:06.743 --rc geninfo_all_blocks=1 00:35:06.743 --rc geninfo_unexecuted_blocks=1 00:35:06.743 00:35:06.743 ' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.743 06:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.743 06:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:14.883 06:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:14.883 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:14.883 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
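The pci_net_devs glob above and the 'Found net devices under ...' echoes below are how nvmf/common.sh maps each matched E810 PCI function to its kernel netdev: every interface owned by a PCI device appears as an entry under that device's net/ node in sysfs. A condensed standalone sketch of the same lookup (not the common.sh code itself; the two addresses are taken from the Found echoes in this log):

    # Each PCI NIC function exposes its netdevs under /sys/bus/pci/devices/<addr>/net/
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done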
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:14.883 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:14.883 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:14.883 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:14.883 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:14.883 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:14.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:14.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms
00:35:14.884
00:35:14.884 --- 10.0.0.2 ping statistics ---
00:35:14.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:14.884 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:14.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:14.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:35:14.884
00:35:14.884 --- 10.0.0.1 ping statistics ---
00:35:14.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:14.884 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3057654
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3057654
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3057654 ']'
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:14.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:14.884 06:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:14.884 [2024-11-20 06:45:34.373097] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:14.884 [2024-11-20 06:45:34.374256] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:35:14.884 [2024-11-20 06:45:34.374308] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.884 [2024-11-20 06:45:34.474532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:14.884 [2024-11-20 06:45:34.527266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.884 [2024-11-20 06:45:34.527313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:14.884 [2024-11-20 06:45:34.527321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.884 [2024-11-20 06:45:34.527329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.884 [2024-11-20 06:45:34.527335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:14.884 [2024-11-20 06:45:34.529366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:14.884 [2024-11-20 06:45:34.529595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:14.884 [2024-11-20 06:45:34.529754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:14.884 [2024-11-20 06:45:34.529756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.884 [2024-11-20 06:45:34.608256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:14.884 [2024-11-20 06:45:34.609233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:14.884 [2024-11-20 06:45:34.609589] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:14.884 [2024-11-20 06:45:34.610134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:14.884 [2024-11-20 06:45:34.610190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
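For orientation, the target that just started is reachable only through the namespace plumbing nvmf/common.sh performed above: one E810 port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace for the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, so NVMe/TCP traffic crosses a real link. A condensed sketch of that plumbing, using the same commands the log records:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2    # root namespace -> namespaced target, as verified above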
00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:15.146 [2024-11-20 06:45:35.246772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:15.146 Malloc0 00:35:15.146 [2024-11-20 06:45:35.350991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3057769 00:35:15.146 06:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3057769 /var/tmp/bdevperf.sock 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3057769 ']' 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:15.146 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:15.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:15.147 { 00:35:15.147 "params": { 00:35:15.147 "name": "Nvme$subsystem", 00:35:15.147 "trtype": "$TEST_TRANSPORT", 00:35:15.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.147 "adrfam": "ipv4", 00:35:15.147 "trsvcid": "$NVMF_PORT", 00:35:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.147 "hdgst": ${hdgst:-false}, 00:35:15.147 "ddgst": ${ddgst:-false} 00:35:15.147 }, 00:35:15.147 "method": "bdev_nvme_attach_controller" 00:35:15.147 } 00:35:15.147 EOF 00:35:15.147 )") 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:15.147 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
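The --json /dev/fd/63 in the bdevperf command line above is bash process substitution: gen_nvmf_target_json (the helper from the test's nvmf/common.sh) expands the heredoc template just shown, pipes it through jq, and the resulting attach-controller JSON (printed in the next records) is handed to bdevperf as if it were a file. The same shape in isolation, paths relative to an SPDK checkout:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 \
        --json <(gen_nvmf_target_json 0)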
00:35:15.408 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:15.408 06:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:15.408 "params": { 00:35:15.408 "name": "Nvme0", 00:35:15.408 "trtype": "tcp", 00:35:15.408 "traddr": "10.0.0.2", 00:35:15.408 "adrfam": "ipv4", 00:35:15.408 "trsvcid": "4420", 00:35:15.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.408 "hdgst": false, 00:35:15.408 "ddgst": false 00:35:15.408 }, 00:35:15.408 "method": "bdev_nvme_attach_controller" 00:35:15.408 }' 00:35:15.408 [2024-11-20 06:45:35.460726] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:35:15.408 [2024-11-20 06:45:35.460796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057769 ] 00:35:15.408 [2024-11-20 06:45:35.555268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.408 [2024-11-20 06:45:35.608799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.669 Running I/O for 10 seconds... 00:35:16.242 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:16.242 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:35:16.242 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:16.242 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.242 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:16.242 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.242 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.243 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:16.243 [2024-11-20 06:45:36.358478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120bf20 is same with the state(6) to be set 00:35:16.243 [2024-11-20 06:45:36.358541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120bf20 is same with the state(6) to be set 00:35:16.243 [2024-11-20 06:45:36.358551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120bf20 is same with the state(6) to be set 00:35:16.243 [2024-11-20 06:45:36.360354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.243 [2024-11-20 06:45:36.360933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.243 [2024-11-20 06:45:36.360943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.360950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.360960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.360967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.360978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.360987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.360996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.244 [2024-11-20 06:45:36.361604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.244 [2024-11-20 06:45:36.361645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:16.244 [2024-11-20 06:45:36.362943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:16.244 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.244 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:16.244 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.244 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:16.244 task offset: 109312 on job bdev=Nvme0n1 fails 00:35:16.244 00:35:16.244 Latency(us) 00:35:16.245 [2024-11-20T05:45:36.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.245 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:16.245 Job: Nvme0n1 ended in about 0.59 seconds with error 00:35:16.245 Verification LBA range: start 0x0 length 0x400 00:35:16.245 Nvme0n1 : 0.59 1438.21 89.89 109.19 0.00 40383.26 1761.28 39540.05 00:35:16.245 [2024-11-20T05:45:36.524Z] =================================================================================================================== 00:35:16.245 [2024-11-20T05:45:36.524Z] Total : 1438.21 89.89 109.19 0.00 40383.26 1761.28 39540.05 00:35:16.245 [2024-11-20 06:45:36.365212] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:16.245 [2024-11-20 06:45:36.365252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe10000 (9): Bad file descriptor 00:35:16.245 [2024-11-20 06:45:36.366786] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:35:16.245 [2024-11-20 06:45:36.366876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:35:16.245 [2024-11-20 06:45:36.366915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:16.245 [2024-11-20 06:45:36.366930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:35:16.245 [2024-11-20 06:45:36.366940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:35:16.245 [2024-11-20 06:45:36.366949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.245 [2024-11-20 06:45:36.366956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe10000 00:35:16.245 [2024-11-20 06:45:36.366978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe10000 (9): Bad file descriptor 00:35:16.245 [2024-11-20 06:45:36.366993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is 
in error state 00:35:16.245 [2024-11-20 06:45:36.367002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:16.245 [2024-11-20 06:45:36.367013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:16.245 [2024-11-20 06:45:36.367024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:16.245 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.245 06:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:35:17.188 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3057769 00:35:17.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3057769) - No such process 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:17.189 { 00:35:17.189 "params": { 00:35:17.189 "name": "Nvme$subsystem", 00:35:17.189 "trtype": "$TEST_TRANSPORT", 00:35:17.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.189 "adrfam": "ipv4", 00:35:17.189 "trsvcid": "$NVMF_PORT", 00:35:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.189 "hdgst": ${hdgst:-false}, 00:35:17.189 "ddgst": ${ddgst:-false} 00:35:17.189 }, 00:35:17.189 "method": "bdev_nvme_attach_controller" 00:35:17.189 } 00:35:17.189 EOF 00:35:17.189 )") 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
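
To recap the failure the long qpair dump above records: the waitforio loop (the @54-@60 records, where read_io_count=771 cleared the 100-read threshold) first confirmed I/O was flowing; @84 then pulled nqn.2016-06.io.spdk:host0 off the subsystem, so every in-flight command came back ABORTED - SQ DELETION, the reconnect attempt hit 'does not allow host', and the controller ended in failed state. By the time the @91 kill -9 fired, bdevperf had already exited ('No such process'), which is why the script follows it with true. The polling half of that sequence reduces to a few lines; this sketch assumes SPDK's scripts/rpc.py is on PATH (the test itself goes through its rpc_cmd wrapper), and the sleep interval is illustrative:

# Sketch of the waitforio polling idiom (target/host_management.sh@45-@64 above).
waitforio() {
    local rpc_sock=$1 bdev=$2 i read_io_count
    for ((i = 10; i != 0; i--)); do    # same ten-attempt budget as the trace
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        # 100 completed reads is the trace's "I/O is flowing" threshold.
        [ "$read_io_count" -ge 100 ] && return 0
        sleep 0.5                      # interval is illustrative, not from the trace
    done
    return 1
}

waitforio /var/tmp/bdevperf.sock Nvme0n1    # above: read_io_count=771, success

The second bdevperf invocation that resumes just below (-t 1, fed over /dev/fd/62) then attaches cleanly at ~1500 IOPS, showing the target itself survived the host-management churn.
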
00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:17.189 06:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:17.189 "params": { 00:35:17.189 "name": "Nvme0", 00:35:17.189 "trtype": "tcp", 00:35:17.189 "traddr": "10.0.0.2", 00:35:17.189 "adrfam": "ipv4", 00:35:17.189 "trsvcid": "4420", 00:35:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.189 "hdgst": false, 00:35:17.189 "ddgst": false 00:35:17.189 }, 00:35:17.189 "method": "bdev_nvme_attach_controller" 00:35:17.189 }' 00:35:17.189 [2024-11-20 06:45:37.440243] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:35:17.189 [2024-11-20 06:45:37.440319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058140 ] 00:35:17.450 [2024-11-20 06:45:37.535520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.450 [2024-11-20 06:45:37.587490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.710 Running I/O for 1 seconds... 00:35:18.649 1472.00 IOPS, 92.00 MiB/s 00:35:18.649 Latency(us) 00:35:18.649 [2024-11-20T05:45:38.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.649 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:18.649 Verification LBA range: start 0x0 length 0x400 00:35:18.649 Nvme0n1 : 1.02 1500.78 93.80 0.00 0.00 41924.46 7099.73 37573.97 00:35:18.649 [2024-11-20T05:45:38.928Z] =================================================================================================================== 00:35:18.649 [2024-11-20T05:45:38.928Z] Total : 1500.78 93.80 0.00 0.00 41924.46 7099.73 37573.97 00:35:18.909 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.910 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.910 rmmod nvme_tcp 00:35:18.910 rmmod nvme_fabrics 00:35:18.910 rmmod nvme_keyring 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3057654 ']' 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3057654 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3057654 ']' 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3057654 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3057654 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3057654' 00:35:18.910 killing process with pid 3057654 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3057654 00:35:18.910 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3057654 00:35:19.170 [2024-11-20 06:45:39.190153] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.170 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:21.078 00:35:21.078 real 0m14.701s 00:35:21.078 user 0m19.470s 00:35:21.078 sys 0m7.444s 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:21.078 ************************************ 00:35:21.078 END TEST nvmf_host_management 00:35:21.078 ************************************ 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:21.078 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:21.339 ************************************ 00:35:21.339 START TEST nvmf_lvol 00:35:21.339 ************************************ 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:21.339 * Looking for test storage... 
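
Before the nvmf_lvol output continues, a note on the teardown just completed: nvmftestfini unwinds in a fixed order, modprobe -v -r of nvme-tcp and nvme-fabrics (which drags out nvme_keyring too, per the rmmod lines) inside a set +e retry loop, killprocess on the target pid, an iptables-save | grep -v SPDK_NVMF | iptables-restore scrub, and removal of the test netns plus an address flush on cvl_0_1. The killprocess helper is the part worth copying: it verifies the pid is alive with kill -0 and inspects the comm name before signalling, so it can never take out a wrapping sudo. A minimal re-creation (the @-markers name the autotest_common.sh records above; branches this run does not exercise are simplified):

# Re-creation of the killprocess flow traced above (autotest_common.sh@952-@976).
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1               # @952: no pid, nothing to do
    kill -0 "$pid" || return 1              # @956: bail out if already gone
    if [ "$(uname)" = Linux ]; then         # @957
        process_name=$(ps --no-headers -o comm= "$pid")  # @958: reactor_1 here
    fi
    [ "$process_name" = sudo ] && return 1  # @962: never signal a wrapping sudo
    echo "killing process with pid $pid"    # @970
    kill "$pid"                             # @971: plain SIGTERM, not -9
    wait "$pid"                             # @976: reap the child, collect status
}

The closing wait both reaps the process and surfaces its exit status; it only works because the helper runs in the same shell that originally launched the target, and stand-alone use needs the same parent/child relationship.
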
00:35:21.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:21.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.339 --rc genhtml_branch_coverage=1 00:35:21.339 --rc genhtml_function_coverage=1 00:35:21.339 --rc genhtml_legend=1 00:35:21.339 --rc geninfo_all_blocks=1 00:35:21.339 --rc geninfo_unexecuted_blocks=1 00:35:21.339 00:35:21.339 ' 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:21.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.339 --rc genhtml_branch_coverage=1 00:35:21.339 --rc genhtml_function_coverage=1 00:35:21.339 --rc genhtml_legend=1 00:35:21.339 --rc geninfo_all_blocks=1 00:35:21.339 --rc geninfo_unexecuted_blocks=1 00:35:21.339 00:35:21.339 ' 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:21.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.339 --rc genhtml_branch_coverage=1 00:35:21.339 --rc genhtml_function_coverage=1 00:35:21.339 --rc genhtml_legend=1 00:35:21.339 --rc geninfo_all_blocks=1 00:35:21.339 --rc geninfo_unexecuted_blocks=1 00:35:21.339 00:35:21.339 ' 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:21.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.339 --rc genhtml_branch_coverage=1 00:35:21.339 --rc genhtml_function_coverage=1 00:35:21.339 --rc genhtml_legend=1 00:35:21.339 --rc geninfo_all_blocks=1 00:35:21.339 --rc geninfo_unexecuted_blocks=1 00:35:21.339 00:35:21.339 ' 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.339 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:35:21.600 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.601 06:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.601 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:29.737 06:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:29.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:29.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.737 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:29.738 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:29.738 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.738 
06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:29.738 06:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:35:29.738 00:35:29.738 --- 10.0.0.2 ping statistics --- 00:35:29.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.738 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:29.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:35:29.738 00:35:29.738 --- 10.0.0.1 ping statistics --- 00:35:29.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.738 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3062773 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3062773 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3062773 ']' 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:29.738 [2024-11-20 06:45:49.167701] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
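Note: the nvmftestinit trace above reduces to a small amount of iproute2 plumbing. One port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator side. A minimal sketch of the same bring-up, using the interface names, addresses, and paths exactly as they appear in this log (run as root; the harness also tags its iptables rule with an -m comment marker so it can clean it up later):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    # launch the target inside the namespace, interrupt mode, cores 0-2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7

After the cross-namespace pings succeed, the harness waits for the target's RPC socket (/var/tmp/spdk.sock) before issuing any RPCs, which is what the waitforlisten step above is doing.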
00:35:29.738 [2024-11-20 06:45:49.168946] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:35:29.738 [2024-11-20 06:45:49.169001] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.738 [2024-11-20 06:45:49.267431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:29.738 [2024-11-20 06:45:49.318937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.738 [2024-11-20 06:45:49.318987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.738 [2024-11-20 06:45:49.318996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.738 [2024-11-20 06:45:49.319003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.738 [2024-11-20 06:45:49.319009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:29.738 [2024-11-20 06:45:49.320878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.738 [2024-11-20 06:45:49.321035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.738 [2024-11-20 06:45:49.321037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:29.738 [2024-11-20 06:45:49.398886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:29.738 [2024-11-20 06:45:49.399913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:29.738 [2024-11-20 06:45:49.400444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:29.738 [2024-11-20 06:45:49.400586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
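Note: with the reactors up and each poll-group thread switched to interrupt mode, target/nvmf_lvol.sh drives the whole test over scripts/rpc.py. Condensed from the trace that follows, with rpc.py standing in for the full scripts/rpc.py path and <...-uuid> marking the UUIDs returned by the create calls (b899028e-..., 6ab9b1d9-..., and so on in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                        # Malloc0
    rpc.py bdev_malloc_create 64 512                        # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs               # returns <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20           # size 20 (LVOL_BDEV_INIT_SIZE)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # exercise the volume while mutating it underneath the I/O:
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
    rpc.py bdev_lvol_resize <lvol-uuid> 30                  # grow to LVOL_BDEV_FINAL_SIZE
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone-uuid>
    wait                                                    # let the 10 s perf run finish
    # teardown: delete the subsystem, the lvol, then the lvstore
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol-uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>

The point of the ordering is that the snapshot, resize, clone, and inflate all happen while spdk_nvme_perf is writing to the namespace, so the lvol metadata operations are validated under live I/O rather than on an idle volume.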
00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:29.738 06:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:30.002 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.002 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:30.002 [2024-11-20 06:45:50.181918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.002 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:30.262 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:35:30.262 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:30.521 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:35:30.521 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:35:30.782 06:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:35:30.782 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b899028e-f44a-4112-ac3b-7685609b87df 00:35:30.782 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b899028e-f44a-4112-ac3b-7685609b87df lvol 20 00:35:31.042 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6ab9b1d9-1bd0-4268-a832-2476cf52d762 00:35:31.042 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:31.303 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6ab9b1d9-1bd0-4268-a832-2476cf52d762 00:35:31.303 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:31.564 [2024-11-20 06:45:51.741895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:35:31.564 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:31.825 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3063176 00:35:31.825 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:35:31.825 06:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:35:32.767 06:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6ab9b1d9-1bd0-4268-a832-2476cf52d762 MY_SNAPSHOT 00:35:33.027 06:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f4315a42-71f8-4a28-8243-280f6a6343d7 00:35:33.027 06:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6ab9b1d9-1bd0-4268-a832-2476cf52d762 30 00:35:33.285 06:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f4315a42-71f8-4a28-8243-280f6a6343d7 MY_CLONE 00:35:33.544 06:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2c864350-5e00-4f7f-a178-97a5a4483587 00:35:33.544 06:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2c864350-5e00-4f7f-a178-97a5a4483587 00:35:34.113 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3063176 00:35:42.251 Initializing NVMe Controllers 00:35:42.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:42.251 Controller IO queue size 128, less than required. 00:35:42.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:42.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:42.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:42.251 Initialization complete. Launching workers. 
00:35:42.251 ======================================================== 00:35:42.251 Latency(us) 00:35:42.251 Device Information : IOPS MiB/s Average min max 00:35:42.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15389.56 60.12 8318.72 1314.68 91210.11 00:35:42.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15199.06 59.37 8422.06 1321.56 92528.13 00:35:42.251 ======================================================== 00:35:42.251 Total : 30588.62 119.49 8370.07 1314.68 92528.13 00:35:42.251 00:35:42.251 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.251 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6ab9b1d9-1bd0-4268-a832-2476cf52d762 00:35:42.511 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b899028e-f44a-4112-ac3b-7685609b87df 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:42.771 rmmod nvme_tcp 00:35:42.771 rmmod nvme_fabrics 00:35:42.771 rmmod nvme_keyring 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3062773 ']' 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3062773 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3062773 ']' 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3062773 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3062773 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3062773' 00:35:42.771 killing process with pid 3062773 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3062773 00:35:42.771 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3062773 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.031 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.941 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:44.941 00:35:44.941 real 0m23.771s 00:35:44.941 user 0m55.913s 00:35:44.941 sys 0m10.650s 00:35:44.941 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:44.941 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:44.941 ************************************ 00:35:44.941 END TEST nvmf_lvol 00:35:44.941 ************************************ 00:35:44.941 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:44.941 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:44.941 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:44.941 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:45.202 ************************************ 00:35:45.202 START TEST nvmf_lvs_grow 00:35:45.202 
************************************ 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:45.202 * Looking for test storage... 00:35:45.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:45.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.202 --rc genhtml_branch_coverage=1 00:35:45.202 --rc genhtml_function_coverage=1 00:35:45.202 --rc genhtml_legend=1 00:35:45.202 --rc geninfo_all_blocks=1 00:35:45.202 --rc geninfo_unexecuted_blocks=1 00:35:45.202 00:35:45.202 ' 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:45.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.202 --rc genhtml_branch_coverage=1 00:35:45.202 --rc genhtml_function_coverage=1 00:35:45.202 --rc genhtml_legend=1 00:35:45.202 --rc geninfo_all_blocks=1 00:35:45.202 --rc geninfo_unexecuted_blocks=1 00:35:45.202 00:35:45.202 ' 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:45.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.202 --rc genhtml_branch_coverage=1 00:35:45.202 --rc genhtml_function_coverage=1 00:35:45.202 --rc genhtml_legend=1 00:35:45.202 --rc geninfo_all_blocks=1 00:35:45.202 --rc geninfo_unexecuted_blocks=1 00:35:45.202 00:35:45.202 ' 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:45.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.202 --rc genhtml_branch_coverage=1 00:35:45.202 --rc genhtml_function_coverage=1 00:35:45.202 --rc genhtml_legend=1 00:35:45.202 --rc geninfo_all_blocks=1 00:35:45.202 --rc geninfo_unexecuted_blocks=1 00:35:45.202 00:35:45.202 ' 00:35:45.202 06:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.202 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:45.203 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:35:45.463 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:53.769 06:46:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:53.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.769 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:53.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:53.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:53.770 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:53.770 06:46:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:53.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:53.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms
00:35:53.770
00:35:53.770 --- 10.0.0.2 ping statistics ---
00:35:53.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:53.770 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:53.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:53.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:35:53.770
00:35:53.770 --- 10.0.0.1 ping statistics ---
00:35:53.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:53.770 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3069486
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3069486
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3069486 ']'
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:53.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable
00:35:53.770 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:35:53.770 [2024-11-20 06:46:12.988153] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
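For reference, the nvmf_tcp_init sequence traced above splits the two e810 netdevs so initiator and target traffic crosses a real link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and serves 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1. A minimal hand-run sketch of the same topology, with interface names, addresses, and the port-4420 rule taken from this run (ipts is the suite's iptables wrapper; plain iptables is shown here):

    # sketch only; cvl_0_0/cvl_0_1 and the 10.0.0.0/24 plan are from this run
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                          # root namespace -> namespace must answer
    ip netns exec "$NS" ping -c 1 10.0.0.1      # and the reverse path

Once both pings pass, every target-side command, including nvmf_tgt itself, is simply prefixed with ip netns exec cvl_0_0_ns_spdk through NVMF_TARGET_NS_CMD, which is exactly what the nvmf_tgt launch line above shows.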
00:35:53.770 [2024-11-20 06:46:12.989292] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:35:53.770 [2024-11-20 06:46:12.989345] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.770 [2024-11-20 06:46:13.085846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.770 [2024-11-20 06:46:13.136522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.770 [2024-11-20 06:46:13.136571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.770 [2024-11-20 06:46:13.136585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.770 [2024-11-20 06:46:13.136592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.770 [2024-11-20 06:46:13.136598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:53.770 [2024-11-20 06:46:13.137397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.770 [2024-11-20 06:46:13.214802] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:53.771 [2024-11-20 06:46:13.215096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:53.771 06:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:53.771 06:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:35:53.771 06:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:53.771 06:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:53.771 06:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:53.771 06:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.771 06:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:53.771 [2024-11-20 06:46:14.018310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:54.032 ************************************ 00:35:54.032 START TEST lvs_grow_clean 00:35:54.032 ************************************ 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:54.032 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:54.292 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:54.292 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:54.292 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=637bdc4b-f652-4221-a4eb-73be21ebde84 00:35:54.292 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:35:54.292 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:54.553 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:54.553 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:54.553 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 637bdc4b-f652-4221-a4eb-73be21ebde84 lvol 150 00:35:54.813 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cc779722-45d9-4a8a-928f-28b728680bb5 00:35:54.813 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:54.813 06:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:54.813 [2024-11-20 06:46:15.057948] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:54.813 [2024-11-20 06:46:15.058111] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:54.813 true 00:35:54.813 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:54.813 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:35:55.073 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:55.073 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:55.333 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cc779722-45d9-4a8a-928f-28b728680bb5 00:35:55.594 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.594 [2024-11-20 06:46:15.798637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.594 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:55.854 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3070171 00:35:55.854 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:55.854 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:55.854 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3070171 /var/tmp/bdevperf.sock 00:35:55.854 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3070171 ']' 00:35:55.854 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:35:55.854 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:55.854 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:55.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:55.854 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:55.854 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:55.854 [2024-11-20 06:46:16.051166] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:35:55.854 [2024-11-20 06:46:16.051243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070171 ] 00:35:56.115 [2024-11-20 06:46:16.143164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.115 [2024-11-20 06:46:16.195271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.685 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:56.685 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:35:56.685 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:57.256 Nvme0n1 00:35:57.256 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:57.256 [ 00:35:57.256 { 00:35:57.256 "name": "Nvme0n1", 00:35:57.256 "aliases": [ 00:35:57.256 "cc779722-45d9-4a8a-928f-28b728680bb5" 00:35:57.256 ], 00:35:57.256 "product_name": "NVMe disk", 00:35:57.256 "block_size": 4096, 00:35:57.256 "num_blocks": 38912, 00:35:57.256 "uuid": "cc779722-45d9-4a8a-928f-28b728680bb5", 00:35:57.256 "numa_id": 0, 00:35:57.256 "assigned_rate_limits": { 00:35:57.256 "rw_ios_per_sec": 0, 00:35:57.256 "rw_mbytes_per_sec": 0, 00:35:57.256 "r_mbytes_per_sec": 0, 00:35:57.256 "w_mbytes_per_sec": 0 00:35:57.256 }, 00:35:57.256 "claimed": false, 00:35:57.257 "zoned": false, 00:35:57.257 "supported_io_types": { 00:35:57.257 "read": true, 00:35:57.257 "write": true, 00:35:57.257 "unmap": true, 00:35:57.257 "flush": true, 00:35:57.257 "reset": true, 00:35:57.257 "nvme_admin": true, 00:35:57.257 "nvme_io": true, 00:35:57.257 "nvme_io_md": false, 00:35:57.257 "write_zeroes": true, 00:35:57.257 "zcopy": false, 00:35:57.257 "get_zone_info": false, 00:35:57.257 "zone_management": false, 00:35:57.257 "zone_append": false, 00:35:57.257 "compare": true, 00:35:57.257 "compare_and_write": true, 00:35:57.257 "abort": true, 00:35:57.257 "seek_hole": false, 00:35:57.257 "seek_data": false, 00:35:57.257 "copy": true, 
00:35:57.257 "nvme_iov_md": false 00:35:57.257 }, 00:35:57.257 "memory_domains": [ 00:35:57.257 { 00:35:57.257 "dma_device_id": "system", 00:35:57.257 "dma_device_type": 1 00:35:57.257 } 00:35:57.257 ], 00:35:57.257 "driver_specific": { 00:35:57.257 "nvme": [ 00:35:57.257 { 00:35:57.257 "trid": { 00:35:57.257 "trtype": "TCP", 00:35:57.257 "adrfam": "IPv4", 00:35:57.257 "traddr": "10.0.0.2", 00:35:57.257 "trsvcid": "4420", 00:35:57.257 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:57.257 }, 00:35:57.257 "ctrlr_data": { 00:35:57.257 "cntlid": 1, 00:35:57.257 "vendor_id": "0x8086", 00:35:57.257 "model_number": "SPDK bdev Controller", 00:35:57.257 "serial_number": "SPDK0", 00:35:57.257 "firmware_revision": "25.01", 00:35:57.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.257 "oacs": { 00:35:57.257 "security": 0, 00:35:57.257 "format": 0, 00:35:57.257 "firmware": 0, 00:35:57.257 "ns_manage": 0 00:35:57.257 }, 00:35:57.257 "multi_ctrlr": true, 00:35:57.257 "ana_reporting": false 00:35:57.257 }, 00:35:57.257 "vs": { 00:35:57.257 "nvme_version": "1.3" 00:35:57.257 }, 00:35:57.257 "ns_data": { 00:35:57.257 "id": 1, 00:35:57.257 "can_share": true 00:35:57.257 } 00:35:57.257 } 00:35:57.257 ], 00:35:57.257 "mp_policy": "active_passive" 00:35:57.257 } 00:35:57.257 } 00:35:57.257 ] 00:35:57.257 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3070388 00:35:57.257 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:57.257 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:57.518 Running I/O for 10 seconds... 
00:35:58.458 Latency(us)
00:35:58.459 [2024-11-20T05:46:18.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:58.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:58.459 Nvme0n1 : 1.00 16520.00 64.53 0.00 0.00 0.00 0.00 0.00
00:35:58.459 [2024-11-20T05:46:18.738Z] ===================================================================================================================
00:35:58.459 [2024-11-20T05:46:18.738Z] Total : 16520.00 64.53 0.00 0.00 0.00 0.00 0.00
00:35:58.459
00:35:59.398 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 637bdc4b-f652-4221-a4eb-73be21ebde84
00:35:59.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:59.398 Nvme0n1 : 2.00 16769.00 65.50 0.00 0.00 0.00 0.00 0.00
00:35:59.398 [2024-11-20T05:46:19.677Z] ===================================================================================================================
00:35:59.398 [2024-11-20T05:46:19.677Z] Total : 16769.00 65.50 0.00 0.00 0.00 0.00 0.00
00:35:59.398
00:35:59.660 true
00:35:59.660 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84
00:35:59.660 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:35:59.921 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:35:59.921 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:35:59.921 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3070388
00:36:00.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:00.492 Nvme0n1 : 3.00 16979.00 66.32 0.00 0.00 0.00 0.00 0.00
00:36:00.492 [2024-11-20T05:46:20.771Z] ===================================================================================================================
00:36:00.492 [2024-11-20T05:46:20.771Z] Total : 16979.00 66.32 0.00 0.00 0.00 0.00 0.00
00:36:00.492
00:36:01.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:01.435 Nvme0n1 : 4.00 17703.25 69.15 0.00 0.00 0.00 0.00 0.00
00:36:01.435 [2024-11-20T05:46:21.714Z] ===================================================================================================================
00:36:01.435 [2024-11-20T05:46:21.714Z] Total : 17703.25 69.15 0.00 0.00 0.00 0.00 0.00
00:36:01.435
00:36:02.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:02.376 Nvme0n1 : 5.00 19163.40 74.86 0.00 0.00 0.00 0.00 0.00
00:36:02.376 [2024-11-20T05:46:22.655Z] ===================================================================================================================
00:36:02.376 [2024-11-20T05:46:22.655Z] Total : 19163.40 74.86 0.00 0.00 0.00 0.00 0.00
00:36:02.376
00:36:03.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:03.759 Nvme0n1 : 6.00 20121.00 78.60 0.00 0.00 0.00 0.00 0.00
00:36:03.759 [2024-11-20T05:46:24.038Z] ===================================================================================================================
00:36:03.759 [2024-11-20T05:46:24.038Z] Total : 20121.00 78.60 0.00 0.00 0.00 0.00 0.00
00:36:03.759
00:36:04.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:04.699 Nvme0n1 : 7.00 20820.71 81.33 0.00 0.00 0.00 0.00 0.00
00:36:04.699 [2024-11-20T05:46:24.978Z] ===================================================================================================================
00:36:04.699 [2024-11-20T05:46:24.978Z] Total : 20820.71 81.33 0.00 0.00 0.00 0.00 0.00
00:36:04.699
00:36:05.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:05.639 Nvme0n1 : 8.00 21345.50 83.38 0.00 0.00 0.00 0.00 0.00
00:36:05.639 [2024-11-20T05:46:25.918Z] ===================================================================================================================
00:36:05.639 [2024-11-20T05:46:25.918Z] Total : 21345.50 83.38 0.00 0.00 0.00 0.00 0.00
00:36:05.639
00:36:06.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:06.579 Nvme0n1 : 9.00 21753.67 84.98 0.00 0.00 0.00 0.00 0.00
00:36:06.579 [2024-11-20T05:46:26.858Z] ===================================================================================================================
00:36:06.579 [2024-11-20T05:46:26.858Z] Total : 21753.67 84.98 0.00 0.00 0.00 0.00 0.00
00:36:06.579
00:36:07.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:07.519 Nvme0n1 : 10.00 22086.80 86.28 0.00 0.00 0.00 0.00 0.00
00:36:07.519 [2024-11-20T05:46:27.798Z] ===================================================================================================================
00:36:07.519 [2024-11-20T05:46:27.798Z] Total : 22086.80 86.28 0.00 0.00 0.00 0.00 0.00
00:36:07.519
00:36:07.519
00:36:07.519 Latency(us)
[2024-11-20T05:46:27.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:07.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:07.519 Nvme0n1 : 10.01 22088.39 86.28 0.00 0.00 5791.84 2880.85 29709.65
00:36:07.519 [2024-11-20T05:46:27.798Z] ===================================================================================================================
00:36:07.519 [2024-11-20T05:46:27.798Z] Total : 22088.39 86.28 0.00 0.00 5791.84 2880.85 29709.65
00:36:07.519 {
00:36:07.519 "results": [
00:36:07.519 {
00:36:07.519 "job": "Nvme0n1",
00:36:07.519 "core_mask": "0x2",
00:36:07.519 "workload": "randwrite",
00:36:07.519 "status": "finished",
00:36:07.519 "queue_depth": 128,
00:36:07.519 "io_size": 4096,
00:36:07.519 "runtime": 10.005073,
00:36:07.519 "iops": 22088.39455744101,
00:36:07.519 "mibps": 86.28279124000395,
00:36:07.519 "io_failed": 0,
00:36:07.519 "io_timeout": 0,
00:36:07.519 "avg_latency_us": 5791.844690522303,
00:36:07.519 "min_latency_us": 2880.8533333333335,
00:36:07.519 "max_latency_us": 29709.653333333332
00:36:07.519 }
00:36:07.519 ],
00:36:07.519 "core_count": 1
00:36:07.519 }
00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3070171
00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3070171 ']'
00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3070171
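The cluster counts bracketing the run above are the actual assertion of lvs_grow_clean. With --cluster-sz 4194304, the 200M aio file corresponds to 50 clusters of 4MiB, of which 49 are reported as data clusters once lvstore metadata is laid out (in this configuration metadata costs one cluster's worth of space); doubling the file should therefore land on 99 total data clusters, which is exactly what the check confirms while bdevperf keeps writing. The grow sequence, condensed from the trace (UUID and paths are from this run):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    AIO=$SPDK/test/nvmf/target/aio_bdev
    LVS=637bdc4b-f652-4221-a4eb-73be21ebde84              # lvstore UUID from this run
    truncate -s 400M "$AIO"                               # backing file: 200M -> 400M
    $SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev         # bdev grows: 51200 -> 102400 4K blocks
    $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS"
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # expect 99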
00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3070171 00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3070171' 00:36:07.519 killing process with pid 3070171 00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3070171 00:36:07.519 Received shutdown signal, test time was about 10.000000 seconds 00:36:07.519 00:36:07.519 Latency(us) 00:36:07.519 [2024-11-20T05:46:27.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.519 [2024-11-20T05:46:27.798Z] =================================================================================================================== 00:36:07.519 [2024-11-20T05:46:27.798Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:07.519 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3070171 00:36:07.779 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:07.779 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:08.039 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:36:08.039 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:08.299 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:08.299 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:36:08.299 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:08.299 [2024-11-20 06:46:28.534002] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 
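The teardown check above is a deliberate negative test: the aio bdev is deleted while the lvstore is still registered, and the next bdev_lvol_get_lvstores call must fail with code -19 (No such device), as the JSON-RPC exchange that follows shows. The NOT wrapper from autotest_common.sh inverts the exit status of the wrapped command; a simplified stand-in for the pattern, not the suite's exact helper:

    NOT() { ! "$@"; }   # hypothetical minimal version; the real helper also screens for signals
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NOT "$RPC" bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 \
        && echo "lvstore unreachable after aio delete, as expected"   # rpc.py exits non-zero on -19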
00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:36:08.559 request: 00:36:08.559 { 00:36:08.559 "uuid": "637bdc4b-f652-4221-a4eb-73be21ebde84", 00:36:08.559 "method": "bdev_lvol_get_lvstores", 00:36:08.559 "req_id": 1 00:36:08.559 } 00:36:08.559 Got JSON-RPC error response 00:36:08.559 response: 00:36:08.559 { 00:36:08.559 "code": -19, 00:36:08.559 "message": "No such device" 00:36:08.559 } 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.559 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:08.820 aio_bdev 00:36:08.820 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
cc779722-45d9-4a8a-928f-28b728680bb5 00:36:08.820 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=cc779722-45d9-4a8a-928f-28b728680bb5 00:36:08.820 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:36:08.820 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:36:08.820 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:36:08.820 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:36:08.820 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:09.088 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cc779722-45d9-4a8a-928f-28b728680bb5 -t 2000 00:36:09.088 [ 00:36:09.088 { 00:36:09.088 "name": "cc779722-45d9-4a8a-928f-28b728680bb5", 00:36:09.088 "aliases": [ 00:36:09.088 "lvs/lvol" 00:36:09.088 ], 00:36:09.088 "product_name": "Logical Volume", 00:36:09.088 "block_size": 4096, 00:36:09.088 "num_blocks": 38912, 00:36:09.088 "uuid": "cc779722-45d9-4a8a-928f-28b728680bb5", 00:36:09.088 "assigned_rate_limits": { 00:36:09.088 "rw_ios_per_sec": 0, 00:36:09.088 "rw_mbytes_per_sec": 0, 00:36:09.088 "r_mbytes_per_sec": 0, 00:36:09.088 "w_mbytes_per_sec": 0 00:36:09.088 }, 00:36:09.088 "claimed": false, 00:36:09.088 "zoned": false, 00:36:09.088 "supported_io_types": { 00:36:09.088 "read": true, 00:36:09.088 "write": true, 00:36:09.088 "unmap": true, 00:36:09.088 "flush": false, 00:36:09.088 "reset": true, 00:36:09.088 "nvme_admin": false, 00:36:09.088 "nvme_io": false, 00:36:09.088 "nvme_io_md": false, 00:36:09.088 "write_zeroes": true, 00:36:09.088 "zcopy": false, 00:36:09.088 "get_zone_info": false, 00:36:09.088 "zone_management": false, 00:36:09.088 "zone_append": false, 00:36:09.088 "compare": false, 00:36:09.088 "compare_and_write": false, 00:36:09.088 "abort": false, 00:36:09.088 "seek_hole": true, 00:36:09.088 "seek_data": true, 00:36:09.088 "copy": false, 00:36:09.088 "nvme_iov_md": false 00:36:09.088 }, 00:36:09.088 "driver_specific": { 00:36:09.088 "lvol": { 00:36:09.088 "lvol_store_uuid": "637bdc4b-f652-4221-a4eb-73be21ebde84", 00:36:09.088 "base_bdev": "aio_bdev", 00:36:09.088 "thin_provision": false, 00:36:09.088 "num_allocated_clusters": 38, 00:36:09.088 "snapshot": false, 00:36:09.088 "clone": false, 00:36:09.088 "esnap_clone": false 00:36:09.088 } 00:36:09.088 } 00:36:09.088 } 00:36:09.088 ] 00:36:09.088 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:36:09.088 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:36:09.088 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:09.351 06:46:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:09.351 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:36:09.351 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:09.611 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:09.611 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cc779722-45d9-4a8a-928f-28b728680bb5 00:36:09.611 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 637bdc4b-f652-4221-a4eb-73be21ebde84 00:36:09.871 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:10.132 00:36:10.132 real 0m16.185s 00:36:10.132 user 0m15.872s 00:36:10.132 sys 0m1.436s 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:10.132 ************************************ 00:36:10.132 END TEST lvs_grow_clean 00:36:10.132 ************************************ 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:10.132 ************************************ 00:36:10.132 START TEST lvs_grow_dirty 00:36:10.132 ************************************ 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:10.132 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:10.133 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:10.133 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:10.133 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:10.133 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:10.133 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:10.392 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:10.392 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:10.652 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ce6755e9-819f-4744-b38d-111300494c58 00:36:10.652 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:10.652 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:10.912 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:10.912 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:10.912 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce6755e9-819f-4744-b38d-111300494c58 lvol 150 00:36:10.912 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d 00:36:10.912 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:10.912 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:11.173 [2024-11-20 06:46:31.253923] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:11.173 [2024-11-20 06:46:31.254088] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:11.173 true 00:36:11.173 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:11.173 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:11.173 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:11.173 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:11.433 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d 00:36:11.693 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:11.693 [2024-11-20 06:46:31.930473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.693 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3073235 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3073235 /var/tmp/bdevperf.sock 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3073235 ']' 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:11.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
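Up to this point the dirty pass is a verbatim replay of the clean one: fresh 200M aio file, lvstore ce6755e9-819f-4744-b38d-111300494c58 with 49 data clusters, a 150M lvol, the backing-file grow plus rescan, and the same NVMe-oF export, now with bdevperf waiting on its socket. The export steps condensed from the trace ($RPC standing for the rpc.py path used throughout):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0     # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The only difference comes at the end of the pass: instead of an orderly delete, the target will be killed with the lvstore still open, which is what the dirty flag selects further down.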
00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:11.953 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:11.953 [2024-11-20 06:46:32.181066] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:36:11.953 [2024-11-20 06:46:32.181131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073235 ] 00:36:12.213 [2024-11-20 06:46:32.267544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.213 [2024-11-20 06:46:32.298340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.782 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:12.782 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:36:12.783 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:13.042 Nvme0n1 00:36:13.042 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:13.302 [ 00:36:13.302 { 00:36:13.302 "name": "Nvme0n1", 00:36:13.302 "aliases": [ 00:36:13.302 "c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d" 00:36:13.302 ], 00:36:13.302 "product_name": "NVMe disk", 00:36:13.302 "block_size": 4096, 00:36:13.302 "num_blocks": 38912, 00:36:13.302 "uuid": "c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d", 00:36:13.302 "numa_id": 0, 00:36:13.302 "assigned_rate_limits": { 00:36:13.302 "rw_ios_per_sec": 0, 00:36:13.302 "rw_mbytes_per_sec": 0, 00:36:13.302 "r_mbytes_per_sec": 0, 00:36:13.302 "w_mbytes_per_sec": 0 00:36:13.302 }, 00:36:13.302 "claimed": false, 00:36:13.302 "zoned": false, 00:36:13.302 "supported_io_types": { 00:36:13.302 "read": true, 00:36:13.302 "write": true, 00:36:13.302 "unmap": true, 00:36:13.302 "flush": true, 00:36:13.302 "reset": true, 00:36:13.302 "nvme_admin": true, 00:36:13.302 "nvme_io": true, 00:36:13.302 "nvme_io_md": false, 00:36:13.302 "write_zeroes": true, 00:36:13.302 "zcopy": false, 00:36:13.302 "get_zone_info": false, 00:36:13.302 "zone_management": false, 00:36:13.303 "zone_append": false, 00:36:13.303 "compare": true, 00:36:13.303 "compare_and_write": true, 00:36:13.303 "abort": true, 00:36:13.303 "seek_hole": false, 00:36:13.303 "seek_data": false, 00:36:13.303 "copy": true, 00:36:13.303 "nvme_iov_md": false 00:36:13.303 }, 00:36:13.303 "memory_domains": [ 00:36:13.303 { 00:36:13.303 "dma_device_id": "system", 00:36:13.303 "dma_device_type": 1 00:36:13.303 } 00:36:13.303 ], 00:36:13.303 "driver_specific": { 00:36:13.303 "nvme": [ 00:36:13.303 { 00:36:13.303 "trid": { 00:36:13.303 "trtype": "TCP", 00:36:13.303 "adrfam": "IPv4", 00:36:13.303 "traddr": "10.0.0.2", 00:36:13.303 "trsvcid": "4420", 00:36:13.303 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:13.303 }, 00:36:13.303 "ctrlr_data": 
{ 00:36:13.303 "cntlid": 1, 00:36:13.303 "vendor_id": "0x8086", 00:36:13.303 "model_number": "SPDK bdev Controller", 00:36:13.303 "serial_number": "SPDK0", 00:36:13.303 "firmware_revision": "25.01", 00:36:13.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.303 "oacs": { 00:36:13.303 "security": 0, 00:36:13.303 "format": 0, 00:36:13.303 "firmware": 0, 00:36:13.303 "ns_manage": 0 00:36:13.303 }, 00:36:13.303 "multi_ctrlr": true, 00:36:13.303 "ana_reporting": false 00:36:13.303 }, 00:36:13.303 "vs": { 00:36:13.303 "nvme_version": "1.3" 00:36:13.303 }, 00:36:13.303 "ns_data": { 00:36:13.303 "id": 1, 00:36:13.303 "can_share": true 00:36:13.303 } 00:36:13.303 } 00:36:13.303 ], 00:36:13.303 "mp_policy": "active_passive" 00:36:13.303 } 00:36:13.303 } 00:36:13.303 ] 00:36:13.303 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3073321 00:36:13.303 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:13.303 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:13.303 Running I/O for 10 seconds... 00:36:14.243 Latency(us) 00:36:14.243 [2024-11-20T05:46:34.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:14.244 Nvme0n1 : 1.00 17052.00 66.61 0.00 0.00 0.00 0.00 0.00 00:36:14.244 [2024-11-20T05:46:34.523Z] =================================================================================================================== 00:36:14.244 [2024-11-20T05:46:34.523Z] Total : 17052.00 66.61 0.00 0.00 0.00 0.00 0.00 00:36:14.244 00:36:15.184 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ce6755e9-819f-4744-b38d-111300494c58 00:36:15.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:15.184 Nvme0n1 : 2.00 17321.00 67.66 0.00 0.00 0.00 0.00 0.00 00:36:15.184 [2024-11-20T05:46:35.463Z] =================================================================================================================== 00:36:15.184 [2024-11-20T05:46:35.463Z] Total : 17321.00 67.66 0.00 0.00 0.00 0.00 0.00 00:36:15.184 00:36:15.444 true 00:36:15.444 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:15.444 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:15.444 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:15.444 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:15.444 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3073321 00:36:16.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:16.386 Nvme0n1 : 
3.00 17410.33 68.01 0.00 0.00 0.00 0.00 0.00
00:36:16.386 [2024-11-20T05:46:36.665Z] ===================================================================================================================
00:36:16.386 [2024-11-20T05:46:36.665Z] Total : 17410.33 68.01 0.00 0.00 0.00 0.00 0.00
00:36:16.386
00:36:17.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:17.326 Nvme0n1 : 4.00 17471.00 68.25 0.00 0.00 0.00 0.00 0.00
00:36:17.326 [2024-11-20T05:46:37.605Z] ===================================================================================================================
00:36:17.326 [2024-11-20T05:46:37.605Z] Total : 17471.00 68.25 0.00 0.00 0.00 0.00 0.00
00:36:17.326
00:36:18.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:18.265 Nvme0n1 : 5.00 18485.60 72.21 0.00 0.00 0.00 0.00 0.00
00:36:18.265 [2024-11-20T05:46:38.544Z] ===================================================================================================================
00:36:18.265 [2024-11-20T05:46:38.544Z] Total : 18485.60 72.21 0.00 0.00 0.00 0.00 0.00
00:36:18.265
00:36:19.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:19.204 Nvme0n1 : 6.00 19550.83 76.37 0.00 0.00 0.00 0.00 0.00
00:36:19.204 [2024-11-20T05:46:39.483Z] ===================================================================================================================
00:36:19.204 [2024-11-20T05:46:39.483Z] Total : 19550.83 76.37 0.00 0.00 0.00 0.00 0.00
00:36:19.204
00:36:20.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:20.585 Nvme0n1 : 7.00 20313.86 79.35 0.00 0.00 0.00 0.00 0.00
00:36:20.585 [2024-11-20T05:46:40.864Z] ===================================================================================================================
00:36:20.585 [2024-11-20T05:46:40.864Z] Total : 20313.86 79.35 0.00 0.00 0.00 0.00 0.00
00:36:20.585
00:36:21.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:21.524 Nvme0n1 : 8.00 20894.12 81.62 0.00 0.00 0.00 0.00 0.00
00:36:21.524 [2024-11-20T05:46:41.803Z] ===================================================================================================================
00:36:21.524 [2024-11-20T05:46:41.803Z] Total : 20894.12 81.62 0.00 0.00 0.00 0.00 0.00
00:36:21.524
00:36:22.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:22.463 Nvme0n1 : 9.00 21349.11 83.39 0.00 0.00 0.00 0.00 0.00
00:36:22.463 [2024-11-20T05:46:42.742Z] ===================================================================================================================
00:36:22.463 [2024-11-20T05:46:42.742Z] Total : 21349.11 83.39 0.00 0.00 0.00 0.00 0.00
00:36:22.463
00:36:23.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:23.402 Nvme0n1 : 10.00 21709.80 84.80 0.00 0.00 0.00 0.00 0.00
00:36:23.402 [2024-11-20T05:46:43.681Z] ===================================================================================================================
00:36:23.402 [2024-11-20T05:46:43.681Z] Total : 21709.80 84.80 0.00 0.00 0.00 0.00 0.00
00:36:23.402
00:36:23.402
00:36:23.402 Latency(us)
[2024-11-20T05:46:43.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:23.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:23.402 Nvme0n1 : 10.00 21710.43 84.81 0.00 0.00 5892.51 2880.85 31457.28
00:36:23.402 [2024-11-20T05:46:43.681Z] ===================================================================================================================
00:36:23.402 [2024-11-20T05:46:43.681Z] Total : 21710.43 84.81 0.00 0.00 5892.51 2880.85 31457.28
00:36:23.402 {
00:36:23.402 "results": [
00:36:23.402 {
00:36:23.402 "job": "Nvme0n1",
00:36:23.402 "core_mask": "0x2",
00:36:23.402 "workload": "randwrite",
00:36:23.402 "status": "finished",
00:36:23.402 "queue_depth": 128,
00:36:23.402 "io_size": 4096,
00:36:23.402 "runtime": 10.002657,
00:36:23.402 "iops": 21710.431538340265,
00:36:23.402 "mibps": 84.80637319664166,
00:36:23.402 "io_failed": 0,
00:36:23.402 "io_timeout": 0,
00:36:23.402 "avg_latency_us": 5892.510569375243,
00:36:23.402 "min_latency_us": 2880.8533333333335,
00:36:23.402 "max_latency_us": 31457.28
00:36:23.402 }
00:36:23.402 ],
00:36:23.402 "core_count": 1
00:36:23.402 }
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3073235
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3073235 ']'
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3073235
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3073235
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3073235'
killing process with pid 3073235
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3073235
00:36:23.402 Received shutdown signal, test time was about 10.000000 seconds
00:36:23.402
00:36:23.402 Latency(us)
[2024-11-20T05:46:43.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T05:46:43.681Z] ===================================================================================================================
[2024-11-20T05:46:43.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3073235
00:36:23.402 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:23.663 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem
nqn.2016-06.io.spdk:cnode0 00:36:23.923 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:23.923 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:23.923 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:23.923 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:36:23.923 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3069486 00:36:23.923 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3069486 00:36:24.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3069486 Killed "${NVMF_APP[@]}" "$@" 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3075493 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3075493 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3075493 ']' 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:24.183 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:24.183 [2024-11-20 06:46:44.286016] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:24.183 [2024-11-20 06:46:44.287146] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:36:24.183 [2024-11-20 06:46:44.287225] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.183 [2024-11-20 06:46:44.380278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.183 [2024-11-20 06:46:44.414286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.184 [2024-11-20 06:46:44.414317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.184 [2024-11-20 06:46:44.414323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.184 [2024-11-20 06:46:44.414327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.184 [2024-11-20 06:46:44.414332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.184 [2024-11-20 06:46:44.414796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.444 [2024-11-20 06:46:44.467829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:24.444 [2024-11-20 06:46:44.468022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
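[editor's note] At this point the harness is blocked in waitforlisten until the freshly restarted target answers on /var/tmp/spdk.sock. A minimal sketch of that wait, for reproducing the step outside the harness — rpc_get_methods is a standard SPDK RPC that succeeds as soon as the app is listening, while the retry budget and sleep interval here are illustrative assumptions, not the helper's exact values:

  # Poll the target's UNIX-domain RPC socket until it responds.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 100); do                       # illustrative retry budget
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          echo "target is up"; break              # RPC answered: app is listening
      fi
      sleep 0.1                                   # illustrative poll interval
  done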
00:36:25.014 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:25.014 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:36:25.014 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:25.014 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:25.014 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:25.014 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:25.014 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:25.274 [2024-11-20 06:46:45.305085] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:25.274 [2024-11-20 06:46:45.305341] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:25.274 [2024-11-20 06:46:45.305435] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:25.274 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d -t 2000 00:36:25.534 [ 00:36:25.534 { 00:36:25.534 "name": "c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d", 00:36:25.534 "aliases": [ 00:36:25.534 "lvs/lvol" 00:36:25.534 ], 00:36:25.534 "product_name": "Logical Volume", 00:36:25.534 "block_size": 4096, 00:36:25.534 "num_blocks": 38912, 00:36:25.534 "uuid": "c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d", 00:36:25.534 "assigned_rate_limits": { 00:36:25.534 "rw_ios_per_sec": 0, 00:36:25.534 "rw_mbytes_per_sec": 0, 00:36:25.534 
"r_mbytes_per_sec": 0, 00:36:25.534 "w_mbytes_per_sec": 0 00:36:25.534 }, 00:36:25.534 "claimed": false, 00:36:25.534 "zoned": false, 00:36:25.534 "supported_io_types": { 00:36:25.534 "read": true, 00:36:25.534 "write": true, 00:36:25.534 "unmap": true, 00:36:25.534 "flush": false, 00:36:25.534 "reset": true, 00:36:25.534 "nvme_admin": false, 00:36:25.534 "nvme_io": false, 00:36:25.534 "nvme_io_md": false, 00:36:25.534 "write_zeroes": true, 00:36:25.534 "zcopy": false, 00:36:25.534 "get_zone_info": false, 00:36:25.534 "zone_management": false, 00:36:25.534 "zone_append": false, 00:36:25.534 "compare": false, 00:36:25.534 "compare_and_write": false, 00:36:25.534 "abort": false, 00:36:25.534 "seek_hole": true, 00:36:25.534 "seek_data": true, 00:36:25.534 "copy": false, 00:36:25.534 "nvme_iov_md": false 00:36:25.534 }, 00:36:25.534 "driver_specific": { 00:36:25.534 "lvol": { 00:36:25.534 "lvol_store_uuid": "ce6755e9-819f-4744-b38d-111300494c58", 00:36:25.534 "base_bdev": "aio_bdev", 00:36:25.534 "thin_provision": false, 00:36:25.534 "num_allocated_clusters": 38, 00:36:25.534 "snapshot": false, 00:36:25.534 "clone": false, 00:36:25.534 "esnap_clone": false 00:36:25.534 } 00:36:25.534 } 00:36:25.534 } 00:36:25.534 ] 00:36:25.534 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:36:25.534 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:25.534 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:36:25.795 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:36:25.795 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:25.795 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:26.056 [2024-11-20 06:46:46.247371] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:26.056 06:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:26.056 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:26.317 request: 00:36:26.317 { 00:36:26.317 "uuid": "ce6755e9-819f-4744-b38d-111300494c58", 00:36:26.317 "method": "bdev_lvol_get_lvstores", 00:36:26.317 "req_id": 1 00:36:26.317 } 00:36:26.317 Got JSON-RPC error response 00:36:26.317 response: 00:36:26.317 { 00:36:26.317 "code": -19, 00:36:26.317 "message": "No such device" 00:36:26.317 } 00:36:26.317 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:36:26.317 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:26.317 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:26.317 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:26.317 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:26.577 aio_bdev 00:36:26.577 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d 00:36:26.577 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d 00:36:26.577 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:36:26.577 06:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:36:26.577 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:36:26.577 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:36:26.577 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:26.577 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d -t 2000 00:36:26.837 [ 00:36:26.837 { 00:36:26.837 "name": "c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d", 00:36:26.837 "aliases": [ 00:36:26.837 "lvs/lvol" 00:36:26.837 ], 00:36:26.837 "product_name": "Logical Volume", 00:36:26.837 "block_size": 4096, 00:36:26.837 "num_blocks": 38912, 00:36:26.837 "uuid": "c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d", 00:36:26.837 "assigned_rate_limits": { 00:36:26.837 "rw_ios_per_sec": 0, 00:36:26.837 "rw_mbytes_per_sec": 0, 00:36:26.837 "r_mbytes_per_sec": 0, 00:36:26.837 "w_mbytes_per_sec": 0 00:36:26.837 }, 00:36:26.837 "claimed": false, 00:36:26.837 "zoned": false, 00:36:26.837 "supported_io_types": { 00:36:26.837 "read": true, 00:36:26.837 "write": true, 00:36:26.837 "unmap": true, 00:36:26.837 "flush": false, 00:36:26.837 "reset": true, 00:36:26.837 "nvme_admin": false, 00:36:26.837 "nvme_io": false, 00:36:26.837 "nvme_io_md": false, 00:36:26.837 "write_zeroes": true, 00:36:26.837 "zcopy": false, 00:36:26.837 "get_zone_info": false, 00:36:26.837 "zone_management": false, 00:36:26.837 "zone_append": false, 00:36:26.837 "compare": false, 00:36:26.837 "compare_and_write": false, 00:36:26.837 "abort": false, 00:36:26.837 "seek_hole": true, 00:36:26.837 "seek_data": true, 00:36:26.837 "copy": false, 00:36:26.837 "nvme_iov_md": false 00:36:26.837 }, 00:36:26.837 "driver_specific": { 00:36:26.837 "lvol": { 00:36:26.837 "lvol_store_uuid": "ce6755e9-819f-4744-b38d-111300494c58", 00:36:26.837 "base_bdev": "aio_bdev", 00:36:26.837 "thin_provision": false, 00:36:26.837 "num_allocated_clusters": 38, 00:36:26.837 "snapshot": false, 00:36:26.837 "clone": false, 00:36:26.837 "esnap_clone": false 00:36:26.837 } 00:36:26.837 } 00:36:26.838 } 00:36:26.838 ] 00:36:26.838 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:36:26.838 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:26.838 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:27.098 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:27.098 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce6755e9-819f-4744-b38d-111300494c58 00:36:27.098 06:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:27.358 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:27.358 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c15d3d6d-31dc-42a6-a0a9-f9e8e15a233d 00:36:27.358 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce6755e9-819f-4744-b38d-111300494c58 00:36:27.619 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:27.879 00:36:27.879 real 0m17.586s 00:36:27.879 user 0m35.502s 00:36:27.879 sys 0m3.018s 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:27.879 ************************************ 00:36:27.879 END TEST lvs_grow_dirty 00:36:27.879 ************************************ 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:27.879 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:36:27.880 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:27.880 nvmf_trace.0 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
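[editor's note] Stripped of the harness plumbing, the lvs_grow_dirty run that just ended above is a short RPC conversation. The sketch below paraphrases nvmf_lvs_grow.sh rather than quoting it; the UUID, cluster counts, and bdev_aio_create arguments are the ones printed in this run, and $spdk is an assumed shorthand for the checkout path used throughout this log:

  rpc=$spdk/scripts/rpc.py
  lvs=ce6755e9-819f-4744-b38d-111300494c58
  # Grow the store while bdevperf keeps writing, then confirm the new size.
  $rpc bdev_lvol_grow_lvstore -u $lvs
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 99 in this run
  # Simulate a crash: kill -9 the target so the lvstore superblock stays dirty,
  # restart it, and re-attach the backing file; re-creating the AIO bdev is what
  # triggers the "Performing recovery on blobstore" replay seen above.
  $rpc bdev_aio_create $spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # After recovery the store reports the same free space as before the crash.
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'         # 61 in this run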
00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:27.880 rmmod nvme_tcp 00:36:27.880 rmmod nvme_fabrics 00:36:27.880 rmmod nvme_keyring 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3075493 ']' 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3075493 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3075493 ']' 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3075493 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:27.880 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3075493 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3075493' 00:36:28.140 killing process with pid 3075493 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3075493 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3075493 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:28.140 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:30.683 00:36:30.683 real 0m45.179s 00:36:30.683 user 0m54.358s 00:36:30.683 sys 0m10.630s 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:30.683 ************************************ 00:36:30.683 END TEST nvmf_lvs_grow 00:36:30.683 ************************************ 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:30.683 ************************************ 00:36:30.683 START TEST nvmf_bdev_io_wait 00:36:30.683 ************************************ 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:30.683 * Looking for test storage... 
00:36:30.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:36:30.683 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:30.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.684 --rc genhtml_branch_coverage=1 00:36:30.684 --rc genhtml_function_coverage=1 00:36:30.684 --rc genhtml_legend=1 00:36:30.684 --rc geninfo_all_blocks=1 00:36:30.684 --rc geninfo_unexecuted_blocks=1 00:36:30.684 00:36:30.684 ' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:30.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.684 --rc genhtml_branch_coverage=1 00:36:30.684 --rc genhtml_function_coverage=1 00:36:30.684 --rc genhtml_legend=1 00:36:30.684 --rc geninfo_all_blocks=1 00:36:30.684 --rc geninfo_unexecuted_blocks=1 00:36:30.684 00:36:30.684 ' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:30.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.684 --rc genhtml_branch_coverage=1 00:36:30.684 --rc genhtml_function_coverage=1 00:36:30.684 --rc genhtml_legend=1 00:36:30.684 --rc geninfo_all_blocks=1 00:36:30.684 --rc geninfo_unexecuted_blocks=1 00:36:30.684 00:36:30.684 ' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:30.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.684 --rc genhtml_branch_coverage=1 00:36:30.684 --rc genhtml_function_coverage=1 00:36:30.684 --rc genhtml_legend=1 00:36:30.684 --rc geninfo_all_blocks=1 00:36:30.684 --rc 
geninfo_unexecuted_blocks=1 00:36:30.684 00:36:30.684 ' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:36:30.684 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
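[editor's note] The walk over pci_devs that follows resolves each matched function to its kernel interface through sysfs and prints the Found-net-devices lines below. The same lookup can be reproduced standalone; this is a sketch of the equivalent query, not the harness's own code (it caches sysfs directly rather than calling lspci), with 8086:159b being the E810 vendor/device pair this run matches:

  # Enumerate the net interfaces behind each Intel E810 (0x8086:0x159b) function.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
      done
  done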
00:36:38.817 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:38.818 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:38.818 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:38.818 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:38.818 
06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:38.818 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:38.818 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:38.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:38.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:36:38.818 00:36:38.818 --- 10.0.0.2 ping statistics --- 00:36:38.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.818 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:38.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:38.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:36:38.818 00:36:38.818 --- 10.0.0.1 ping statistics --- 00:36:38.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.818 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3080372 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3080372 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3080372 ']' 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
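Note: the trace above brings the target up inside the freshly created network namespace, with interrupt mode enabled and initialization gated on RPC. A minimal sketch of the equivalent manual launch (binary path and namespace name taken from this run; the rpc.py polling loop is an illustrative stand-in for the script's waitforlisten helper):

    # run the SPDK target in the target namespace: 4 cores (-m 0xF), all
    # tracepoint groups (-e 0xFFFF), init paused until framework_start_init
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # block until the app answers on the default RPC socket /var/tmp/spdk.sock
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done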
00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:38.818 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:38.819 [2024-11-20 06:46:58.311495] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:38.819 [2024-11-20 06:46:58.312611] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:36:38.819 [2024-11-20 06:46:58.312662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.819 [2024-11-20 06:46:58.413097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:38.819 [2024-11-20 06:46:58.467585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.819 [2024-11-20 06:46:58.467635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.819 [2024-11-20 06:46:58.467643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.819 [2024-11-20 06:46:58.467651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.819 [2024-11-20 06:46:58.467661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:38.819 [2024-11-20 06:46:58.469958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.819 [2024-11-20 06:46:58.470109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.819 [2024-11-20 06:46:58.470273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.819 [2024-11-20 06:46:58.470273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:38.819 [2024-11-20 06:46:58.470622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
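Note: the reactor NOTICE lines above confirm one reactor per bit of the 0xF core mask (cores 0-3), with the app thread placed in interrupt mode, i.e. idle reactors sleep on file descriptors instead of busy-polling. A hedged sketch for inspecting that state on a live target (framework_get_reactors is an existing SPDK RPC; the in_interrupt field and the jq filter are assumptions based on recent SPDK output):

    # list each reactor's core and whether it currently runs in interrupt mode
    ./scripts/rpc.py framework_get_reactors | \
        jq '.reactors[] | {lcore: .lcore, in_interrupt: .in_interrupt}'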
00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 [2024-11-20 06:46:59.238415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:39.079 [2024-11-20 06:46:59.239109] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:39.079 [2024-11-20 06:46:59.239195] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:39.079 [2024-11-20 06:46:59.239328] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
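Note: this pre-init RPC pair is the crux of the bdev_io_wait test. Because the target was started with --wait-for-rpc, bdev_set_options can still run, and it deliberately shrinks the spdk_bdev_io pool to 5 entries with a per-thread cache of 1, so the bdevperf jobs below exhaust the pool and exercise the spdk_bdev_queue_io_wait() retry path. The same two calls as traced above:

    # must precede framework_start_init; the tiny pool forces ENOMEM in the
    # bdev layer, which submitters then handle via io_wait callbacks
    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # pool size 5, cache size 1
    ./scripts/rpc.py framework_start_init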
00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 [2024-11-20 06:46:59.250852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 Malloc0 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 [2024-11-20 06:46:59.323467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3080722 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3080724 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:39.079 { 00:36:39.079 "params": { 00:36:39.079 "name": "Nvme$subsystem", 00:36:39.079 "trtype": "$TEST_TRANSPORT", 00:36:39.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:39.079 "adrfam": "ipv4", 00:36:39.079 "trsvcid": "$NVMF_PORT", 00:36:39.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:39.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:39.079 "hdgst": ${hdgst:-false}, 00:36:39.079 "ddgst": ${ddgst:-false} 00:36:39.079 }, 00:36:39.079 "method": "bdev_nvme_attach_controller" 00:36:39.079 } 00:36:39.079 EOF 00:36:39.079 )") 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3080726 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:39.079 { 00:36:39.079 "params": { 00:36:39.079 "name": "Nvme$subsystem", 00:36:39.079 "trtype": "$TEST_TRANSPORT", 00:36:39.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:39.079 "adrfam": "ipv4", 00:36:39.079 "trsvcid": "$NVMF_PORT", 00:36:39.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:39.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:39.079 "hdgst": ${hdgst:-false}, 00:36:39.079 "ddgst": ${ddgst:-false} 00:36:39.079 }, 00:36:39.079 "method": "bdev_nvme_attach_controller" 00:36:39.079 } 00:36:39.079 EOF 00:36:39.079 )") 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3080729 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:39.079 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:39.080 { 00:36:39.080 "params": { 00:36:39.080 "name": "Nvme$subsystem", 00:36:39.080 "trtype": "$TEST_TRANSPORT", 00:36:39.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:39.080 "adrfam": "ipv4", 00:36:39.080 "trsvcid": "$NVMF_PORT", 00:36:39.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:39.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:39.080 "hdgst": ${hdgst:-false}, 00:36:39.080 "ddgst": ${ddgst:-false} 00:36:39.080 }, 00:36:39.080 "method": "bdev_nvme_attach_controller" 00:36:39.080 } 00:36:39.080 EOF 00:36:39.080 )") 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:39.080 { 00:36:39.080 "params": { 00:36:39.080 "name": "Nvme$subsystem", 00:36:39.080 "trtype": "$TEST_TRANSPORT", 00:36:39.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:39.080 "adrfam": "ipv4", 00:36:39.080 "trsvcid": "$NVMF_PORT", 00:36:39.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:39.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:39.080 "hdgst": ${hdgst:-false}, 00:36:39.080 "ddgst": ${ddgst:-false} 00:36:39.080 }, 00:36:39.080 "method": "bdev_nvme_attach_controller" 00:36:39.080 } 00:36:39.080 EOF 00:36:39.080 )") 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3080722 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
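Note: four bdevperf instances are launched in parallel, one workload each, on disjoint core masks and with distinct instance ids (-i) so their shared-memory regions do not collide; each reads the generated attach-controller JSON from /dev/fd/63. Collected from the trace above (binary path abbreviated):

    # -q 128: queue depth, -o 4096: 4 KiB IOs, -t 1: run 1 s, -s 256: 256 MB mem
    ./build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    ./build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    ./build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    ./build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!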
00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:39.080 "params": { 00:36:39.080 "name": "Nvme1", 00:36:39.080 "trtype": "tcp", 00:36:39.080 "traddr": "10.0.0.2", 00:36:39.080 "adrfam": "ipv4", 00:36:39.080 "trsvcid": "4420", 00:36:39.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:39.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:39.080 "hdgst": false, 00:36:39.080 "ddgst": false 00:36:39.080 }, 00:36:39.080 "method": "bdev_nvme_attach_controller" 00:36:39.080 }' 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:39.080 "params": { 00:36:39.080 "name": "Nvme1", 00:36:39.080 "trtype": "tcp", 00:36:39.080 "traddr": "10.0.0.2", 00:36:39.080 "adrfam": "ipv4", 00:36:39.080 "trsvcid": "4420", 00:36:39.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:39.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:39.080 "hdgst": false, 00:36:39.080 "ddgst": false 00:36:39.080 }, 00:36:39.080 "method": "bdev_nvme_attach_controller" 00:36:39.080 }' 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:39.080 "params": { 00:36:39.080 "name": "Nvme1", 00:36:39.080 "trtype": "tcp", 00:36:39.080 "traddr": "10.0.0.2", 00:36:39.080 "adrfam": "ipv4", 00:36:39.080 "trsvcid": "4420", 00:36:39.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:39.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:39.080 "hdgst": false, 00:36:39.080 "ddgst": false 00:36:39.080 }, 00:36:39.080 "method": "bdev_nvme_attach_controller" 00:36:39.080 }' 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:39.080 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:39.080 "params": { 00:36:39.080 "name": "Nvme1", 00:36:39.080 "trtype": "tcp", 00:36:39.080 "traddr": "10.0.0.2", 00:36:39.080 "adrfam": "ipv4", 00:36:39.080 "trsvcid": "4420", 00:36:39.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:39.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:39.080 "hdgst": false, 00:36:39.080 "ddgst": false 00:36:39.080 }, 00:36:39.080 "method": "bdev_nvme_attach_controller" 00:36:39.080 }' 00:36:39.340 [2024-11-20 06:46:59.381023] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:36:39.341 [2024-11-20 06:46:59.381117] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:36:39.341 [2024-11-20 06:46:59.382189] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:36:39.341 [2024-11-20 06:46:59.382251] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:36:39.341 [2024-11-20 06:46:59.386118] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:36:39.341 [2024-11-20 06:46:59.386220] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:36:39.341 [2024-11-20 06:46:59.392913] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:36:39.341 [2024-11-20 06:46:59.393009] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:36:39.341 [2024-11-20 06:46:59.595045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.601 [2024-11-20 06:46:59.633679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:39.601 [2024-11-20 06:46:59.685624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.601 [2024-11-20 06:46:59.725102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:39.601 [2024-11-20 06:46:59.777261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.601 [2024-11-20 06:46:59.822910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:39.601 [2024-11-20 06:46:59.849172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.932 [2024-11-20 06:46:59.885262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:39.932 Running I/O for 1 seconds... 00:36:39.932 Running I/O for 1 seconds... 00:36:39.932 Running I/O for 1 seconds... 00:36:39.932 Running I/O for 1 seconds... 
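Note: in the result tables that follow, MiB/s is simply IOPS scaled by the 4 KiB IO size: for the write job, 10835.38 IOPS x 4096 B / 2^20 B/MiB = 42.33 MiB/s (i.e. IOPS / 256). The flush job's ~185k IOPS outlier is expected: a flush against a malloc bdev moves no data, so it completes far faster than the 4 KiB read/write/unmap workloads.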
00:36:40.919 10774.00 IOPS, 42.09 MiB/s
00:36:40.919 Latency(us)
00:36:40.919 [2024-11-20T05:47:01.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:40.919 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:36:40.919 Nvme1n1 : 1.01 10835.38 42.33 0.00 0.00 11766.02 2293.76 13817.17
00:36:40.919 [2024-11-20T05:47:01.198Z] ===================================================================================================================
00:36:40.919 [2024-11-20T05:47:01.198Z] Total : 10835.38 42.33 0.00 0.00 11766.02 2293.76 13817.17
00:36:40.919 9474.00 IOPS, 37.01 MiB/s
00:36:40.919 Latency(us)
00:36:40.919 [2024-11-20T05:47:01.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:40.919 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:36:40.919 Nvme1n1 : 1.01 9526.60 37.21 0.00 0.00 13380.91 5488.64 16384.00
00:36:40.919 [2024-11-20T05:47:01.198Z] ===================================================================================================================
00:36:40.919 [2024-11-20T05:47:01.198Z] Total : 9526.60 37.21 0.00 0.00 13380.91 5488.64 16384.00
00:36:40.919 11068.00 IOPS, 43.23 MiB/s
00:36:40.919 Latency(us)
00:36:40.919 [2024-11-20T05:47:01.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:40.919 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:36:40.919 Nvme1n1 : 1.01 11161.46 43.60 0.00 0.00 11434.86 3659.09 19114.67
00:36:40.919 [2024-11-20T05:47:01.198Z] ===================================================================================================================
00:36:40.919 [2024-11-20T05:47:01.198Z] Total : 11161.46 43.60 0.00 0.00 11434.86 3659.09 19114.67
00:36:40.919 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3080724
00:36:40.919 185752.00 IOPS, 725.59 MiB/s
00:36:40.919 Latency(us)
00:36:40.919 [2024-11-20T05:47:01.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:40.919 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:36:40.919 Nvme1n1 : 1.00 185387.56 724.17 0.00 0.00 686.61 305.49 1966.08
00:36:40.919 [2024-11-20T05:47:01.198Z] ===================================================================================================================
00:36:40.919 [2024-11-20T05:47:01.198Z] Total : 185387.56 724.17 0.00 0.00 686.61 305.49 1966.08
00:36:40.919 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3080726
00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3080729
00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:41.179 rmmod nvme_tcp 00:36:41.179 rmmod nvme_fabrics 00:36:41.179 rmmod nvme_keyring 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3080372 ']' 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3080372 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3080372 ']' 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3080372 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3080372 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3080372' 00:36:41.179 killing process with pid 3080372 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3080372 00:36:41.179 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3080372 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 
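Note: the iptr teardown around this point relies on the tagging done at setup: the ACCEPT rule inserted earlier carried an '-m comment --comment SPDK_NVMF:...' marker, so cleanup can strip exactly the rules this test added and restore the rest, as the save/grep/restore pipeline in the surrounding trace shows:

    # drop only the rules tagged by this test, keep everything else intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore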
00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.439 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:43.982 00:36:43.982 real 0m13.136s 00:36:43.982 user 0m16.199s 00:36:43.982 sys 0m7.679s 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:43.982 ************************************ 00:36:43.982 END TEST nvmf_bdev_io_wait 00:36:43.982 ************************************ 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:43.982 ************************************ 00:36:43.982 START TEST nvmf_queue_depth 00:36:43.982 ************************************ 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:43.982 * Looking for test storage... 
00:36:43.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:36:43.982 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:43.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.983 --rc genhtml_branch_coverage=1 00:36:43.983 --rc genhtml_function_coverage=1 00:36:43.983 --rc genhtml_legend=1 00:36:43.983 --rc geninfo_all_blocks=1 00:36:43.983 --rc geninfo_unexecuted_blocks=1 00:36:43.983 00:36:43.983 ' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:43.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.983 --rc genhtml_branch_coverage=1 00:36:43.983 --rc genhtml_function_coverage=1 00:36:43.983 --rc genhtml_legend=1 00:36:43.983 --rc geninfo_all_blocks=1 00:36:43.983 --rc geninfo_unexecuted_blocks=1 00:36:43.983 00:36:43.983 ' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:43.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.983 --rc genhtml_branch_coverage=1 00:36:43.983 --rc genhtml_function_coverage=1 00:36:43.983 --rc genhtml_legend=1 00:36:43.983 --rc geninfo_all_blocks=1 00:36:43.983 --rc geninfo_unexecuted_blocks=1 00:36:43.983 00:36:43.983 ' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:43.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.983 --rc genhtml_branch_coverage=1 00:36:43.983 --rc genhtml_function_coverage=1 00:36:43.983 --rc genhtml_legend=1 00:36:43.983 --rc geninfo_all_blocks=1 00:36:43.983 --rc 
geninfo_unexecuted_blocks=1 00:36:43.983 00:36:43.983 ' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:43.983 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:36:43.984 06:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
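Note: the gather step that follows classifies NICs by PCI vendor:device id; with SPDK_TEST_NVMF_NICS=e810 only 0x8086:0x159b functions (the two E810 'ice' ports found below) survive the filter, and their interface names are read from sysfs. A by-hand equivalent of that lookup, assuming the standard sysfs layout the script walks:

    pci=0000:4b:00.0
    cat /sys/bus/pci/devices/$pci/vendor   # 0x8086
    cat /sys/bus/pci/devices/$pci/device   # 0x159b
    ls /sys/bus/pci/devices/$pci/net       # cvl_0_0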
00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:36:52.129 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:52.130 06:47:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:52.130 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:52.130 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:36:52.130 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:52.130 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:52.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:52.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:36:52.130 00:36:52.130 --- 10.0.0.2 ping statistics --- 00:36:52.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.130 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:52.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:52.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:36:52.130 00:36:52.130 --- 10.0.0.1 ping statistics --- 00:36:52.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.130 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:52.130 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3085161 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3085161 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3085161 ']' 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:52.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
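The waitforlisten step above is what turns "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." into a pass or a fail. A minimal sketch of that polling pattern (illustrative only; the retry budget and the rpc_get_methods probe are assumptions, the real helper lives in common/autotest_common.sh):

  # Poll until a freshly launched SPDK app answers on its RPC UNIX socket.
  # $1 = pid of the app, $2 = RPC socket path (defaults match the trace above).
  wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
          if [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
              return 0                             # socket is up and serving RPCs
          fi
          sleep 0.1
      done
      return 1                                     # gave up after roughly 10 s
  }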
00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:52.131 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.131 [2024-11-20 06:47:11.475334] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:52.131 [2024-11-20 06:47:11.476432] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:36:52.131 [2024-11-20 06:47:11.476483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:52.131 [2024-11-20 06:47:11.580536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.131 [2024-11-20 06:47:11.631224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:52.131 [2024-11-20 06:47:11.631272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:52.131 [2024-11-20 06:47:11.631281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:52.131 [2024-11-20 06:47:11.631288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:52.131 [2024-11-20 06:47:11.631295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:52.131 [2024-11-20 06:47:11.632069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.131 [2024-11-20 06:47:11.709808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:52.131 [2024-11-20 06:47:11.710100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
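The two thread.c notices above confirm that --interrupt-mode took effect before any subsystem work starts: app_thread and nvmf_tgt_poll_group_000 are switched from busy polling to event-driven wakeups. As a hedged aside (not part of the test flow), the same state can be inspected on a live target with the framework_get_reactors RPC, whose per-reactor output should include an in_interrupt flag:

  # Query reactor state of the target running inside the test namespace.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors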
00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.131 [2024-11-20 06:47:12.332920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.131 Malloc0 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.131 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.391 [2024-11-20 06:47:12.417131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3085445 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3085445 /var/tmp/bdevperf.sock 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3085445 ']' 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:52.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:52.391 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.391 [2024-11-20 06:47:12.475622] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
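Rewinding a step: stripped of the xtrace prefixes, the provisioning that target/queue_depth.sh performed over /var/tmp/spdk.sock before launching bdevperf is a five-call sequence. A condensed replay, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and paths relative to the SPDK checkout:

  rpc="scripts/rpc.py"                          # rpc_cmd defaults to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192  # flags exactly as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket is a plain UNIX socket, so these calls work from the host even though the target's network interface lives inside the cvl_0_0_ns_spdk namespace.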
00:36:52.391 [2024-11-20 06:47:12.475682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085445 ]
00:36:52.391 [2024-11-20 06:47:12.567083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:52.391 [2024-11-20 06:47:12.620282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:53.332 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:36:53.332 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:36:53.332 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:36:53.332 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:53.332 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:36:53.332 NVMe0n1
00:36:53.332 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:53.332 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:36:53.332 Running I/O for 10 seconds...
00:36:55.219 8947.00 IOPS, 34.95 MiB/s
[2024-11-20T05:47:16.882Z] 8964.50 IOPS, 35.02 MiB/s
[2024-11-20T05:47:17.822Z] 9289.67 IOPS, 36.29 MiB/s
[2024-11-20T05:47:18.761Z] 10285.75 IOPS, 40.18 MiB/s
[2024-11-20T05:47:19.709Z] 10962.40 IOPS, 42.82 MiB/s
[2024-11-20T05:47:20.647Z] 11408.17 IOPS, 44.56 MiB/s
[2024-11-20T05:47:21.587Z] 11708.57 IOPS, 45.74 MiB/s
[2024-11-20T05:47:22.526Z] 11955.25 IOPS, 46.70 MiB/s
[2024-11-20T05:47:23.907Z] 12153.22 IOPS, 47.47 MiB/s
[2024-11-20T05:47:23.907Z] 12291.00 IOPS, 48.01 MiB/s
00:37:03.628 Latency(us)
00:37:03.628 [2024-11-20T05:47:23.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:03.628 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:37:03.628 Verification LBA range: start 0x0 length 0x4000
00:37:03.628 NVMe0n1 : 10.06 12322.17 48.13 0.00 0.00 82841.04 24357.55 71215.79
00:37:03.629 [2024-11-20T05:47:23.908Z] ===================================================================================================================
00:37:03.629 [2024-11-20T05:47:23.908Z] Total : 12322.17 48.13 0.00 0.00 82841.04 24357.55 71215.79
00:37:03.629 {
00:37:03.629 "results": [
00:37:03.629 {
00:37:03.629 "job": "NVMe0n1",
00:37:03.629 "core_mask": "0x1",
00:37:03.629 "workload": "verify",
00:37:03.629 "status": "finished",
00:37:03.629 "verify_range": {
00:37:03.629 "start": 0,
00:37:03.629 "length": 16384
00:37:03.629 },
00:37:03.629 "queue_depth": 1024,
00:37:03.629 "io_size": 4096,
00:37:03.629 "runtime": 10.057643,
00:37:03.629 "iops": 12322.1713079297,
00:37:03.629 "mibps": 48.13348167160039,
00:37:03.629 "io_failed": 0,
00:37:03.629 "io_timeout": 0,
00:37:03.629 "avg_latency_us": 82841.04321122335,
00:37:03.629 "min_latency_us": 24357.546666666665,
00:37:03.629 "max_latency_us": 71215.78666666667
00:37:03.629 }
00:37:03.629 ],
00:37:03.629 "core_count": 1
00:37:03.629 }
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3085445
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3085445 ']'
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3085445
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3085445
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3085445'
00:37:03.629 killing process with pid 3085445
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3085445
00:37:03.629 Received shutdown signal, test time was about 10.000000 seconds
00:37:03.629
00:37:03.629 Latency(us)
00:37:03.629 [2024-11-20T05:47:23.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:03.629 [2024-11-20T05:47:23.908Z] ===================================================================================================================
00:37:03.629 [2024-11-20T05:47:23.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3085445
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:03.629 rmmod nvme_tcp
00:37:03.629 rmmod nvme_fabrics
00:37:03.629 rmmod nvme_keyring
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
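The whole queue-depth exercise above follows bdevperf's standard remote-control pattern. Condensed from the trace, with the long Jenkins workspace paths shortened but the binaries and flags unchanged:

  # 1) Start bdevperf idle (-z) on its own RPC socket: QD 1024, 4 KiB verify I/O, 10 s.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!

  # 2) Attach the target subsystem over NVMe/TCP; it surfaces as bdev NVMe0n1.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # 3) Run the preconfigured job and emit the JSON summary seen above.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The JSON block is the machine-readable result; the killprocess and rmmod lines that follow are the generic teardown shared by every case in this log.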
00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3085161 ']' 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3085161 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3085161 ']' 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3085161 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3085161 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3085161' 00:37:03.629 killing process with pid 3085161 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3085161 00:37:03.629 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3085161 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.890 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.799 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:05.799 00:37:05.799 real 0m22.345s 00:37:05.799 user 0m24.530s 00:37:05.799 sys 0m7.374s 00:37:05.799 06:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:05.799 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:05.799 ************************************ 00:37:05.799 END TEST nvmf_queue_depth 00:37:05.799 ************************************ 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:06.060 ************************************ 00:37:06.060 START TEST nvmf_target_multipath 00:37:06.060 ************************************ 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:06.060 * Looking for test storage... 00:37:06.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:37:06.060 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.321 --rc genhtml_branch_coverage=1 00:37:06.321 --rc genhtml_function_coverage=1 00:37:06.321 --rc genhtml_legend=1 00:37:06.321 --rc geninfo_all_blocks=1 00:37:06.321 --rc geninfo_unexecuted_blocks=1 00:37:06.321 00:37:06.321 ' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.321 --rc genhtml_branch_coverage=1 00:37:06.321 --rc genhtml_function_coverage=1 00:37:06.321 --rc genhtml_legend=1 00:37:06.321 --rc geninfo_all_blocks=1 00:37:06.321 --rc geninfo_unexecuted_blocks=1 00:37:06.321 00:37:06.321 ' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.321 --rc genhtml_branch_coverage=1 00:37:06.321 --rc genhtml_function_coverage=1 00:37:06.321 --rc genhtml_legend=1 
00:37:06.321 --rc geninfo_all_blocks=1 00:37:06.321 --rc geninfo_unexecuted_blocks=1 00:37:06.321 00:37:06.321 ' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:06.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.321 --rc genhtml_branch_coverage=1 00:37:06.321 --rc genhtml_function_coverage=1 00:37:06.321 --rc genhtml_legend=1 00:37:06.321 --rc geninfo_all_blocks=1 00:37:06.321 --rc geninfo_unexecuted_blocks=1 00:37:06.321 00:37:06.321 ' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:06.321 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:37:06.322 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.458 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.459 06:47:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:14.459 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:14.459 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.459 06:47:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:14.459 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:14.459 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:14.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:37:14.459 00:37:14.459 --- 10.0.0.2 ping statistics --- 00:37:14.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.459 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:37:14.459 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:37:14.460 00:37:14.460 --- 10.0.0.1 ping statistics --- 00:37:14.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.460 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:37:14.460 only one NIC for nvmf test 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:14.460 rmmod nvme_tcp 00:37:14.460 rmmod nvme_fabrics 00:37:14.460 rmmod nvme_keyring 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:14.460 06:47:33 
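The block above is the harness wiring a point-to-point TCP topology out of the two E810 ports: cvl_0_0 (the target side) is moved into a private network namespace, cvl_0_1 (the initiator side) stays in the root namespace, TCP/4420 is opened with a first-position iptables rule, and both directions are ping-checked. Condensed to just the commands, with the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged iptables wrapper, see below
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

With the setup verified, the multipath test bails out immediately: the '[' -z ']' check at multipath.sh@45 finds NVMF_SECOND_TARGET_IP empty (both ports are already consumed by the single target/initiator pair), so it prints "only one NIC for nvmf test", runs nvmftestfini, and exits 0.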
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:14.460 06:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:37:14.460 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.460 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.460 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.460 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:14.460 06:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:15.845 06:47:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:15.845 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:15.846 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:16.107 00:37:16.107 real 0m9.974s 00:37:16.107 user 0m2.222s 00:37:16.107 sys 0m5.713s 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:16.107 ************************************ 00:37:16.107 END TEST nvmf_target_multipath 00:37:16.107 ************************************ 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:16.107 ************************************ 00:37:16.107 START TEST nvmf_zcopy 00:37:16.107 ************************************ 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:16.107 * Looking for test storage... 
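Note that the teardown above runs twice: multipath.sh@47 calls nvmftestfini explicitly, and the EXIT trap installed by nvmftestinit (the multipath.sh@1 frame in the trace) fires it again on the way out; the second pass is effectively a no-op apart from re-flushing addresses. The firewall cleanup is the interesting idiom: every rule the suite inserts carries an SPDK_NVMF comment, so iptr can strip them all in one pass without tracking rule numbers:

    # ipts tags each inserted rule with its own argument string
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # iptr removes every tagged rule wholesale
    iptables-save | grep -v SPDK_NVMF | iptables-restore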
00:37:16.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:37:16.107 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.368 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:16.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.369 --rc genhtml_branch_coverage=1 00:37:16.369 --rc genhtml_function_coverage=1 00:37:16.369 --rc genhtml_legend=1 00:37:16.369 --rc geninfo_all_blocks=1 00:37:16.369 --rc geninfo_unexecuted_blocks=1 00:37:16.369 00:37:16.369 ' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:16.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.369 --rc genhtml_branch_coverage=1 00:37:16.369 --rc genhtml_function_coverage=1 00:37:16.369 --rc genhtml_legend=1 00:37:16.369 --rc geninfo_all_blocks=1 00:37:16.369 --rc geninfo_unexecuted_blocks=1 00:37:16.369 00:37:16.369 ' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:16.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.369 --rc genhtml_branch_coverage=1 00:37:16.369 --rc genhtml_function_coverage=1 00:37:16.369 --rc genhtml_legend=1 00:37:16.369 --rc geninfo_all_blocks=1 00:37:16.369 --rc geninfo_unexecuted_blocks=1 00:37:16.369 00:37:16.369 ' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:16.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.369 --rc genhtml_branch_coverage=1 00:37:16.369 --rc genhtml_function_coverage=1 00:37:16.369 --rc genhtml_legend=1 00:37:16.369 --rc geninfo_all_blocks=1 00:37:16.369 --rc geninfo_unexecuted_blocks=1 00:37:16.369 00:37:16.369 ' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
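The lt/cmp_versions chatter above is the zcopy test probing the installed lcov (1.15 here, so "lt 1.15 2" succeeds) before exporting the LCOV_OPTS coverage flags. The helper splits each version on '.' and '-' and compares component by component; a simplified reconstruction of the logic the trace walks through (the real cmp_versions in scripts/common.sh also validates each component and supports the other comparison operators):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                  # e.g. cmp_versions 1.15 '<' 2
        local IFS=.- v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # component greater: not '<'
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # 1 < 2 decides it here
        done
        return 1                                            # equal: not '<'
    }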
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.369 06:47:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:37:16.369 06:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:37:24.506 06:47:43 
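Two things are worth flagging as nvmftestinit starts over for the zcopy test. First, the suite's --interrupt-mode argument is consumed by build_nvmf_app_args (common.sh@33-34 above), which appends it to the target's argv, so every reactor below runs interrupt-driven rather than busy-polling. In outline (the guard variable's name is illustrative; only the '[' 1 -eq 1 ']' test is visible in the trace):

    NVMF_APP=(".../build/bin/nvmf_tgt")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shared-memory id + full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")
    [ "$interrupt_mode" -eq 1 ] && NVMF_APP+=(--interrupt-mode)

Second, prepare_net_devs trusts nothing left over from the multipath test: remove_spdk_ns and the address flushes return the NICs to a known state, and the whole PCI discovery below re-runs from scratch.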
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:24.506 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:24.506 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:24.506 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:24.506 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:24.506 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:24.507 06:47:43 
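The rediscovery classifies candidate NICs purely by PCI vendor/device ID; condensed from the trace, with the arrays exactly as common.sh builds them:

    e810=(0x1592 0x159b)        # Intel E810 variants; this host's 0000:4b:00.0/1 match 0x159b
    x722=(0x37d2)               # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)   # Mellanox/NVIDIA
    pci_devs=("${e810[@]}")     # SPDK_TEST_NVMF_NICS=e810 narrows the list to E810 only
    # each surviving PCI function maps to its netdev via /sys/bus/pci/devices/$pci/net/*

That lands on the same cvl_0_0/cvl_0_1 pair as before, and the namespace plumbing and ping checks repeat verbatim for this test.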
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:24.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:24.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:37:24.507 00:37:24.507 --- 10.0.0.2 ping statistics --- 00:37:24.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.507 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:24.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:24.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:37:24.507 00:37:24.507 --- 10.0.0.1 ping statistics --- 00:37:24.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.507 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3095775 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3095775 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3095775 ']' 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:24.507 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 [2024-11-20 06:47:43.873127] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:24.507 [2024-11-20 06:47:43.874434] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:37:24.507 [2024-11-20 06:47:43.874482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.507 [2024-11-20 06:47:43.968341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.507 [2024-11-20 06:47:44.003202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.507 [2024-11-20 06:47:44.003234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.507 [2024-11-20 06:47:44.003243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.507 [2024-11-20 06:47:44.003251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.507 [2024-11-20 06:47:44.003258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:24.507 [2024-11-20 06:47:44.003803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.507 [2024-11-20 06:47:44.059222] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:24.507 [2024-11-20 06:47:44.059475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
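nvmfappstart then launches the target inside the namespace and blocks until it answers RPCs. A condensed equivalent of what the log shows (the real waitforlisten in autotest_common.sh is more elaborate, but it amounts to polling the RPC socket):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!      # 3095775 in this run
    # poll /var/tmp/spdk.sock until the app services RPCs
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

The startup NOTICEs confirm interrupt mode took effect end to end: the single reactor comes up on core 1 (mask 0x2), and both app_thread and nvmf_tgt_poll_group_000 report being switched to intr mode.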
00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 [2024-11-20 06:47:44.692557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 [2024-11-20 06:47:44.720802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:37:24.507 06:47:44 
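With the target listening on the RPC socket, rpc_cmd (a thin wrapper that drives scripts/rpc.py against /var/tmp/spdk.sock) provisions the zero-copy transport and a one-controller subsystem. The sequence above, written out as plain rpc.py calls:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # -c 0: no in-capsule data; --zcopy: zero-copy enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM-backed bdev, 4 KiB blocks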
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 malloc0 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:37:24.507 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:37:24.508 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:37:24.508 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:24.508 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:24.508 { 00:37:24.508 "params": { 00:37:24.508 "name": "Nvme$subsystem", 00:37:24.508 "trtype": "$TEST_TRANSPORT", 00:37:24.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.508 "adrfam": "ipv4", 00:37:24.508 "trsvcid": "$NVMF_PORT", 00:37:24.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.508 "hdgst": ${hdgst:-false}, 00:37:24.508 "ddgst": ${ddgst:-false} 00:37:24.508 }, 00:37:24.508 "method": "bdev_nvme_attach_controller" 00:37:24.508 } 00:37:24.508 EOF 00:37:24.508 )") 00:37:24.508 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:37:24.508 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:37:24.767 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:37:24.767 06:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:24.767 "params": { 00:37:24.767 "name": "Nvme1", 00:37:24.767 "trtype": "tcp", 00:37:24.767 "traddr": "10.0.0.2", 00:37:24.767 "adrfam": "ipv4", 00:37:24.767 "trsvcid": "4420", 00:37:24.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:24.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:24.767 "hdgst": false, 00:37:24.767 "ddgst": false 00:37:24.767 }, 00:37:24.767 "method": "bdev_nvme_attach_controller" 00:37:24.767 }' 00:37:24.767 [2024-11-20 06:47:44.819236] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
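gen_nvmf_target_json is how bdevperf learns about the target without a config file on disk: the heredoc above emits one bdev_nvme_attach_controller stanza per subsystem, jq normalizes it, and the result is handed over on an anonymous descriptor (--json /dev/fd/62). Assembled, the JSON bdevperf consumes looks roughly like this (a reconstruction: the visible fragment is only the inner stanza, and the "subsystems"/"bdev" envelope is inferred from how such bdev JSON configs are shaped):

    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }

The first bdevperf pass is -t 10 -q 128 -w verify -o 8192: ten seconds of data-verified 8 KiB I/O at queue depth 128 against the resulting Nvme1n1 bdev.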
00:37:24.767 [2024-11-20 06:47:44.819287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096116 ]
00:37:24.767 [2024-11-20 06:47:44.904989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:24.767 [2024-11-20 06:47:44.941970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:25.027 Running I/O for 10 seconds...
00:37:26.906 6394.00 IOPS, 49.95 MiB/s
[2024-11-20T05:47:48.126Z] 6531.50 IOPS, 51.03 MiB/s
[2024-11-20T05:47:49.507Z] 6542.67 IOPS, 51.11 MiB/s
[2024-11-20T05:47:50.450Z] 6561.00 IOPS, 51.26 MiB/s
[2024-11-20T05:47:51.390Z] 6727.20 IOPS, 52.56 MiB/s
[2024-11-20T05:47:52.332Z] 7211.50 IOPS, 56.34 MiB/s
[2024-11-20T05:47:53.273Z] 7559.43 IOPS, 59.06 MiB/s
[2024-11-20T05:47:54.214Z] 7815.75 IOPS, 61.06 MiB/s
[2024-11-20T05:47:55.153Z] 8016.44 IOPS, 62.63 MiB/s
[2024-11-20T05:47:55.153Z] 8178.70 IOPS, 63.90 MiB/s
00:37:34.874 Latency(us)
00:37:34.874 [2024-11-20T05:47:55.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:34.874 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:37:34.874 Verification LBA range: start 0x0 length 0x1000
00:37:34.874 Nvme1n1 : 10.01 8182.01 63.92 0.00 0.00 15597.83 2061.65 26432.85
00:37:34.874 [2024-11-20T05:47:55.153Z] ===================================================================================================================
00:37:34.874 [2024-11-20T05:47:55.153Z] Total : 8182.01 63.92 0.00 0.00 15597.83 2061.65 26432.85
00:37:35.133 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3098088
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:35.134 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:35.134 {
00:37:35.134 "params": {
00:37:35.134 "name": "Nvme$subsystem",
00:37:35.134 "trtype": "$TEST_TRANSPORT",
00:37:35.134 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:35.134 "adrfam": "ipv4",
00:37:35.134 "trsvcid": "$NVMF_PORT",
00:37:35.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:35.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:35.134 "hdgst": ${hdgst:-false},
00:37:35.134 "ddgst": ${ddgst:-false}
00:37:35.134 },
00:37:35.134 "method": "bdev_nvme_attach_controller"
00:37:35.134 }
00:37:35.134 EOF
00:37:35.134 )")
06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:37:35.134
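Phase two relaunches bdevperf for a shorter mixed workload and, crucially, keeps its PID so the harness can fire RPCs at the target while I/O is in flight:

    ./build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!      # 3098088 in this run

-M 50 makes the random workload a 50/50 read/write mix, and the JSON on fd 63 is the same single-controller config as before. Also worth reading off the ten-second samples above: throughput climbs steadily from ~6.4k to ~8.2k IOPS over the verify run.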
[2024-11-20 06:47:55.228123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.228152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:37:35.134 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:37:35.134 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:35.134 "params": { 00:37:35.134 "name": "Nvme1", 00:37:35.134 "trtype": "tcp", 00:37:35.134 "traddr": "10.0.0.2", 00:37:35.134 "adrfam": "ipv4", 00:37:35.134 "trsvcid": "4420", 00:37:35.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:35.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:35.134 "hdgst": false, 00:37:35.134 "ddgst": false 00:37:35.134 }, 00:37:35.134 "method": "bdev_nvme_attach_controller" 00:37:35.134 }' 00:37:35.134 [2024-11-20 06:47:55.240092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.240102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.252089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.252098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.264089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.264098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.271397] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:37:35.134 [2024-11-20 06:47:55.271447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098088 ] 00:37:35.134 [2024-11-20 06:47:55.276090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.276098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.288089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.288098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.300089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.300098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.312089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.312098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.324089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.324098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.336090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.336098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.348089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.348097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.352938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.134 [2024-11-20 06:47:55.360090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.360098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.372090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.372099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.382283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.134 [2024-11-20 06:47:55.384089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.384098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.396094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.396104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.134 [2024-11-20 06:47:55.408094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:35.134 [2024-11-20 06:47:55.408106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:35.395 [2024-11-20 06:47:55.420091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
[... error pair repeats from 06:47:55.420 through 06:47:55.672 ...]
00:37:35.656 Running I/O for 5 seconds...
[... error pair repeats from 06:47:55.689 through 06:47:55.771 ...]
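The pair of messages that repeats throughout this run comes from the target-side add-namespace RPC failing because NSID 1 is already owned by the subsystem, while bdevperf keeps I/O going. A hedged sketch of the call that triggers it, assuming SPDK's JSON-RPC transport on its default Unix socket and the usual nvmf_subsystem_add_ns parameter shape; the bdev name "Malloc0" is hypothetical, the nqn matches the log:

    import json, socket

    def spdk_rpc(method, params, path="/var/tmp/spdk.sock"):
        # JSON-RPC 2.0 request over SPDK's default RPC socket.
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            s.sendall(json.dumps(req).encode())
            return json.loads(s.recv(1 << 16))  # single read; fine for a sketch

    # NSID 1 is already in use, so the target answers with an error and logs
    # "Requested NSID 1 already in use" / "Unable to add namespace".
    print(spdk_rpc("nvmf_subsystem_add_ns", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "Malloc0", "nsid": 1},
    }))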
[... error pair repeats from 06:47:55.784 through 06:47:56.680 ...]
00:37:36.439 18960.00 IOPS, 148.12 MiB/s [2024-11-20T05:47:56.718Z]
[... error pair repeats from 06:47:56.692 through 06:47:56.763 ...]
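The progress line above is internally consistent: 18960 IOPS at an 8 KiB I/O size gives exactly the reported throughput. The 8 KiB size is inferred from the numbers themselves, not stated anywhere in this snippet:

    iops = 18960.00
    io_size = 8 * 1024                       # assumed 8 KiB per I/O (inferred)
    mib_per_s = iops * io_size / (1024 * 1024)
    print(f"{mib_per_s:.2f} MiB/s")          # 148.12 MiB/s, matching the log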
[... error pair repeats from 06:47:56.776 through 06:47:57.679 ...]
00:37:37.511 19005.00 IOPS, 148.48 MiB/s [2024-11-20T05:47:57.790Z]
[... error pair repeats from 06:47:57.692 through 06:47:58.683 ...]
00:37:38.620 19020.00 IOPS, 148.59 MiB/s [2024-11-20T05:47:58.899Z]
[... error pair repeats from 06:47:58.696 through 06:47:59.135 ...]
00:37:38.881 [2024-11-20 06:47:59.148127]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:38.881 [2024-11-20 06:47:59.148142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.161018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.161034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.175852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.175867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.189133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.189147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.202956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.202971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.216015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.216030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.228884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.228899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.243654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.243669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.256978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.256993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.271197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.271212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.284210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.284225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.297398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.297412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.311463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.311478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.324276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.324292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.337206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.337221] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.351647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.351663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.365001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.365016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.379001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.379016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.392197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.392213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.142 [2024-11-20 06:47:59.405533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.142 [2024-11-20 06:47:59.405547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.419335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.419351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.432284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.432299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.445382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.445397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.459172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.459188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.472379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.472394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.487247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.487262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.500346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.500361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.513114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.513128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.527392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.527407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.540397] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.540412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.555072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.555091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.568023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.568038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.581123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.581138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.402 [2024-11-20 06:47:59.595363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.402 [2024-11-20 06:47:59.595378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.403 [2024-11-20 06:47:59.608343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.403 [2024-11-20 06:47:59.608358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.403 [2024-11-20 06:47:59.621112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.403 [2024-11-20 06:47:59.621126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.403 [2024-11-20 06:47:59.634808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.403 [2024-11-20 06:47:59.634823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.403 [2024-11-20 06:47:59.647635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.403 [2024-11-20 06:47:59.647650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.403 [2024-11-20 06:47:59.660942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.403 [2024-11-20 06:47:59.660956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.403 [2024-11-20 06:47:59.675415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.403 [2024-11-20 06:47:59.675431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.688497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.688512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 19020.25 IOPS, 148.60 MiB/s [2024-11-20T05:47:59.942Z] [2024-11-20 06:47:59.703233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.703248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.716261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.716276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.729292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:37:39.663 [2024-11-20 06:47:59.729307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.743250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.743265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.756368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.756383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.771259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.771274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.784271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.784286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.797482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.797497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.663 [2024-11-20 06:47:59.811351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.663 [2024-11-20 06:47:59.811370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.824496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.824510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.839882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.839897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.852817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.852832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.867263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.867279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.880268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.880283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.893397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.893413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.907463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.907480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.920749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.920764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.664 [2024-11-20 06:47:59.935863] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.664 [2024-11-20 06:47:59.935878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:47:59.948994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:47:59.949010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:47:59.963241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:47:59.963256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:47:59.976297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:47:59.976313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:47:59.989225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:47:59.989240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.003751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.003767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.016888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.016904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.031016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.031032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.044221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.044236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.057187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.057202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.071708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.071728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.084854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.084869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.099468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.099483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.112583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.112598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.127228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.127244] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.140172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.140187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.153344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.153359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.167676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.167691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.181136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.181151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:39.925 [2024-11-20 06:48:00.195926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:39.925 [2024-11-20 06:48:00.195942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.208842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.208857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.223076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.223092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.236150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.236170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.249338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.249353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.263138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.263153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.276265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.276281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.289399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.289415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.303394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.303410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.316197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.316212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.329264] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.186 [2024-11-20 06:48:00.329278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.186 [2024-11-20 06:48:00.343181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.343196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.356171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.356187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.369282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.369297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.383202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.383218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.396423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.396438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.411641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.411657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.425002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.425017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.439341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.439356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.187 [2024-11-20 06:48:00.452780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.187 [2024-11-20 06:48:00.452795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.447 [2024-11-20 06:48:00.467277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.447 [2024-11-20 06:48:00.467293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.480286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.480301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.493345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.493360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.507180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.507195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.520133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.520149] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.533040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.533055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.547599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.547614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.560779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.560794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.575370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.575385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.588647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.588662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.603396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.603411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.616882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.616896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.631328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.631343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.644346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.644361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.657351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.657367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.671812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.671828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 [2024-11-20 06:48:00.684774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.684789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 19017.40 IOPS, 148.57 MiB/s [2024-11-20T05:48:00.727Z] [2024-11-20 06:48:00.698039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:40.448 [2024-11-20 06:48:00.698054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:40.448 00:37:40.448 Latency(us) 00:37:40.448 [2024-11-20T05:48:00.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.448 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:37:40.448 
00:37:40.448 Nvme1n1 : 5.01 19019.06 148.59 0.00 0.00 6724.05 2607.79 11687.25
00:37:40.448 [2024-11-20T05:48:00.727Z] ===================================================================================================================
00:37:40.448 [2024-11-20T05:48:00.727Z] Total : 19019.06 148.59 0.00 0.00 6724.05 2607.79 11687.25
[... the same error pair repeats nine more times (06:48:00.708 through 06:48:00.804) as the retry loop is torn down ...]
00:37:40.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3098088) - No such process
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3098088
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:40.709 delay0
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:40.709 06:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:37:41.012 [2024-11-20 06:48:01.011334] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:37:47.541 [2024-11-20 06:48:07.541343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3f60 is same with the state(6) to be set
00:37:47.550 Initializing NVMe Controllers
00:37:47.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:47.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:47.550 Initialization complete. Launching workers.
00:37:47.550 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3363
00:37:47.550 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3637, failed to submit 46
00:37:47.550 success 3439, unsuccessful 198, failed 0
00:37:47.550 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:37:47.550 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:47.551 rmmod nvme_tcp
00:37:47.551 rmmod nvme_fabrics
00:37:47.551 rmmod nvme_keyring
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3095775 ']'
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3095775
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3095775 ']'
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3095775
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3095775
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3095775'
00:37:47.551 killing process with pid 3095775
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3095775
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3095775
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:47.551 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:50.092
00:37:50.092 real 0m33.670s
00:37:50.092 user 0m42.905s
00:37:50.092 sys 0m12.202s
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:50.092 ************************************
00:37:50.092 END TEST nvmf_zcopy
00:37:50.092 ************************************
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:50.092 ************************************
00:37:50.092 START TEST nvmf_nmic
00:37:50.092 ************************************
00:37:50.092 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:37:50.092 * Looking for test storage...
00:37:50.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:50.092 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:37:50.092 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:37:50.092 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:37:50.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:50.093 --rc genhtml_branch_coverage=1
00:37:50.093 --rc genhtml_function_coverage=1
00:37:50.093 --rc genhtml_legend=1
00:37:50.093 --rc geninfo_all_blocks=1
00:37:50.093 --rc geninfo_unexecuted_blocks=1
00:37:50.093
00:37:50.093 '
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:37:50.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:50.093 --rc genhtml_branch_coverage=1
00:37:50.093 --rc genhtml_function_coverage=1
00:37:50.093 --rc genhtml_legend=1
00:37:50.093 --rc geninfo_all_blocks=1
00:37:50.093 --rc geninfo_unexecuted_blocks=1
00:37:50.093
00:37:50.093 '
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:37:50.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:50.093 --rc genhtml_branch_coverage=1
00:37:50.093 --rc genhtml_function_coverage=1
00:37:50.093 --rc genhtml_legend=1
00:37:50.093 --rc geninfo_all_blocks=1
00:37:50.093 --rc geninfo_unexecuted_blocks=1
00:37:50.093
00:37:50.093 '
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:37:50.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:50.093 --rc genhtml_branch_coverage=1
00:37:50.093 --rc genhtml_function_coverage=1
00:37:50.093 --rc genhtml_legend=1
00:37:50.093 --rc geninfo_all_blocks=1
00:37:50.093 --rc geninfo_unexecuted_blocks=1
00:37:50.093
00:37:50.093 '
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.093 06:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:58.231 06:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.231 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:58.232 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.232 06:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:58.232 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:58.232 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.232 
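The device scan above is driven by common.sh's pci_bus_cache, keyed by vendor:device. A simplified stand-in (variable names ven/dev are illustrative, not the script's) that finds the same E810 ports and their kernel netdev names via sysfs:

  for pci in /sys/bus/pci/devices/*; do
      ven=$(<"$pci/vendor") dev=$(<"$pci/device")
      if [[ $ven == 0x8086 ]] && [[ $dev == 0x159b || $dev == 0x1592 ]]; then
          echo "Found ${pci##*/} ($ven - $dev)"   # e.g. 0000:4b:00.0 (0x8086 - 0x159b)
          ls "$pci/net" 2>/dev/null               # netdev under the port, e.g. cvl_0_0
      fi
  done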
06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:58.232 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
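nvmf_tcp_init, traced here and just below, isolates the target port in its own network namespace so initiator/target traffic crosses the physical link rather than loopback. Condensed from the commands in this run (interface and namespace names as printed above):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                             # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                       # root ns -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> initiator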
00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:37:58.232 00:37:58.232 --- 10.0.0.2 ping statistics --- 00:37:58.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.232 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:58.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:37:58.232 00:37:58.232 --- 10.0.0.1 ping statistics --- 00:37:58.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.232 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3105036 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3105036 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:58.232 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3105036 ']' 00:37:58.233 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.233 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:58.233 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.233 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:58.233 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 [2024-11-20 06:48:17.496712] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.233 [2024-11-20 06:48:17.497687] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:37:58.233 [2024-11-20 06:48:17.497724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.233 [2024-11-20 06:48:17.591013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:58.233 [2024-11-20 06:48:17.628731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.233 [2024-11-20 06:48:17.628765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.233 [2024-11-20 06:48:17.628773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.233 [2024-11-20 06:48:17.628780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.233 [2024-11-20 06:48:17.628786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.233 [2024-11-20 06:48:17.630492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.233 [2024-11-20 06:48:17.630648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.233 [2024-11-20 06:48:17.630797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.233 [2024-11-20 06:48:17.630798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:58.233 [2024-11-20 06:48:17.687731] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:58.233 [2024-11-20 06:48:17.689368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:58.233 [2024-11-20 06:48:17.689539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
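nvmfappstart assembles NVMF_APP from the flags accumulated earlier (-i shm-id, -e 0xFFFF, --interrupt-mode) and launches it inside the target namespace; waitforlisten then blocks until the RPC socket answers. A hedged stand-in for those two helpers, with an rpc_get_methods poll substituting for waitforlisten:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Wait until the app responds on its UNIX-domain RPC socket (30 s budget).
  ./scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods > /dev/null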
00:37:58.233 [2024-11-20 06:48:17.690463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:58.233 [2024-11-20 06:48:17.690474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 [2024-11-20 06:48:18.343622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 Malloc0 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
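The rpc_cmd calls above provision the target end to end. The same sequence as plain rpc.py invocations (rpc_cmd in the trace is a thin wrapper around this script):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192 B IO unit
  $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420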
00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 [2024-11-20 06:48:18.447918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:37:58.233 test case1: single bdev can't be used in multiple subsystems 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 [2024-11-20 06:48:18.483190] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:37:58.233 [2024-11-20 06:48:18.483221] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:37:58.233 [2024-11-20 06:48:18.483230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:58.233 request: 00:37:58.233 { 00:37:58.233 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:37:58.233 "namespace": { 00:37:58.233 "bdev_name": "Malloc0", 00:37:58.233 "no_auto_visible": false 00:37:58.233 }, 00:37:58.233 "method": "nvmf_subsystem_add_ns", 00:37:58.233 "req_id": 1 00:37:58.233 } 00:37:58.233 Got JSON-RPC error response 00:37:58.233 response: 00:37:58.233 { 00:37:58.233 "code": -32602, 00:37:58.233 "message": "Invalid parameters" 00:37:58.233 } 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:37:58.233 06:48:18 
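Test case 1 above is a deliberate failure: Malloc0 is already claimed exclusive_write by cnode1, so attaching it to cnode2 must be rejected with -32602. The nmic.sh assertion pattern, restated standalone (the failure-branch message is illustrative):

  nmic_status=0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
  if [ "$nmic_status" -eq 0 ]; then
      echo "Adding namespace passed - failure expected." >&2   # would fail the test
      exit 1
  fi
  echo " Adding namespace failed - expected result."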
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:37:58.233 Adding namespace failed - expected result. 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:37:58.233 test case2: host connect to nvmf target in multiple paths 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:58.233 [2024-11-20 06:48:18.495348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.233 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:58.803 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:37:59.064 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:37:59.064 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:37:59.064 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:37:59.064 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:37:59.064 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:38:01.605 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:38:01.605 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:38:01.605 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:38:01.605 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:38:01.605 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:38:01.605 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:38:01.605 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:01.605 [global] 00:38:01.605 thread=1 00:38:01.605 invalidate=1 
00:38:01.605 rw=write 00:38:01.605 time_based=1 00:38:01.605 runtime=1 00:38:01.605 ioengine=libaio 00:38:01.605 direct=1 00:38:01.605 bs=4096 00:38:01.605 iodepth=1 00:38:01.605 norandommap=0 00:38:01.605 numjobs=1 00:38:01.605 00:38:01.605 verify_dump=1 00:38:01.605 verify_backlog=512 00:38:01.605 verify_state_save=0 00:38:01.605 do_verify=1 00:38:01.605 verify=crc32c-intel 00:38:01.605 [job0] 00:38:01.605 filename=/dev/nvme0n1 00:38:01.605 Could not set queue depth (nvme0n1) 00:38:01.605 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:01.605 fio-3.35 00:38:01.605 Starting 1 thread 00:38:02.989 00:38:02.989 job0: (groupid=0, jobs=1): err= 0: pid=3106073: Wed Nov 20 06:48:22 2024 00:38:02.989 read: IOPS=18, BW=75.3KiB/s (77.1kB/s)(76.0KiB/1009msec) 00:38:02.989 slat (nsec): min=30626, max=31891, avg=31079.79, stdev=304.39 00:38:02.989 clat (usec): min=40928, max=41078, avg=40966.48, stdev=36.61 00:38:02.989 lat (usec): min=40959, max=41109, avg=40997.56, stdev=36.59 00:38:02.989 clat percentiles (usec): 00:38:02.989 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:02.989 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:02.989 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:02.989 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:02.989 | 99.99th=[41157] 00:38:02.989 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:38:02.989 slat (nsec): min=10309, max=89343, avg=30137.04, stdev=12961.89 00:38:02.989 clat (usec): min=217, max=3974, avg=412.99, stdev=168.83 00:38:02.989 lat (usec): min=228, max=4063, avg=443.13, stdev=173.61 00:38:02.989 clat percentiles (usec): 00:38:02.989 | 1.00th=[ 247], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 343], 00:38:02.989 | 30.00th=[ 363], 40.00th=[ 404], 50.00th=[ 424], 60.00th=[ 441], 00:38:02.989 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 469], 95.00th=[ 482], 00:38:02.989 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 3982], 99.95th=[ 3982], 00:38:02.989 | 99.99th=[ 3982] 00:38:02.989 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:38:02.989 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:02.989 lat (usec) : 250=1.13%, 500=93.41%, 750=1.69% 00:38:02.989 lat (msec) : 4=0.19%, 50=3.58% 00:38:02.989 cpu : usr=0.60%, sys=2.28%, ctx=531, majf=0, minf=1 00:38:02.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:02.989 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:02.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:02.989 00:38:02.989 Run status group 0 (all jobs): 00:38:02.989 READ: bw=75.3KiB/s (77.1kB/s), 75.3KiB/s-75.3KiB/s (77.1kB/s-77.1kB/s), io=76.0KiB (77.8kB), run=1009-1009msec 00:38:02.989 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:38:02.989 00:38:02.989 Disk stats (read/write): 00:38:02.989 nvme0n1: ios=66/512, merge=0/0, ticks=715/174, in_queue=889, util=93.69% 00:38:02.989 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:02.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:38:02.989 
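Test case 2 exercises multipath on the host side: the same subsystem is reached over ports 4420 and 4421, a small verified fio write runs through SPDK's wrapper, and one disconnect drops both controllers. Condensed from the run above:

  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # 4 KiB libaio writes at queue depth 1 with crc32c-intel verify (job file as printed above)
  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # tears down both paths at once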
06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:02.989 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:38:02.989 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:38:02.989 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:02.989 rmmod nvme_tcp 00:38:02.989 rmmod nvme_fabrics 00:38:02.989 rmmod nvme_keyring 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3105036 ']' 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3105036 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3105036 ']' 00:38:02.989 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3105036 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3105036 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3105036' 00:38:02.990 killing process with pid 3105036 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3105036 00:38:02.990 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3105036 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.250 06:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.164 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:05.164 00:38:05.164 real 0m15.394s 00:38:05.164 user 0m31.921s 00:38:05.164 sys 0m7.344s 00:38:05.164 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:05.164 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:05.164 ************************************ 00:38:05.164 END TEST nvmf_nmic 00:38:05.164 ************************************ 00:38:05.164 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:05.164 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:05.164 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:05.164 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:05.427 ************************************ 00:38:05.427 START TEST nvmf_fio_target 00:38:05.427 ************************************ 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:05.427 * Looking for test storage... 
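nvmftestfini, traced just above, unwinds the setup in reverse. A condensed recap; the namespace removal happens inside _remove_spdk_ns with xtrace disabled, so that line is an assumption:

  sync
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # unload host-side modules
  kill "$nvmfpid" && wait "$nvmfpid"                       # stop nvmf_tgt
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only SPDK's comment-tagged rules
  ip netns delete cvl_0_0_ns_spdk                          # assumed: performed by _remove_spdk_ns
  ip -4 addr flush cvl_0_1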
00:38:05.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.427 --rc genhtml_branch_coverage=1 00:38:05.427 --rc genhtml_function_coverage=1 00:38:05.427 --rc genhtml_legend=1 00:38:05.427 --rc geninfo_all_blocks=1 00:38:05.427 --rc geninfo_unexecuted_blocks=1 00:38:05.427 00:38:05.427 ' 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.427 --rc genhtml_branch_coverage=1 00:38:05.427 --rc genhtml_function_coverage=1 00:38:05.427 --rc genhtml_legend=1 00:38:05.427 --rc geninfo_all_blocks=1 00:38:05.427 --rc geninfo_unexecuted_blocks=1 00:38:05.427 00:38:05.427 ' 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.427 --rc genhtml_branch_coverage=1 00:38:05.427 --rc genhtml_function_coverage=1 00:38:05.427 --rc genhtml_legend=1 00:38:05.427 --rc geninfo_all_blocks=1 00:38:05.427 --rc geninfo_unexecuted_blocks=1 00:38:05.427 00:38:05.427 ' 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.427 --rc genhtml_branch_coverage=1 00:38:05.427 --rc genhtml_function_coverage=1 00:38:05.427 --rc genhtml_legend=1 00:38:05.427 --rc geninfo_all_blocks=1 00:38:05.427 --rc geninfo_unexecuted_blocks=1 00:38:05.427 
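The lcov probe above walks scripts/common.sh's cmp_versions ("lt 1.15 2") to decide whether the pre-2.0 lcov option set applies. A hedged re-implementation of that dotted-version compare (the helper name is illustrative; common.sh spells it lt/cmp_versions):

  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)    # split both versions on dots
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0   # missing fields compare as 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 option style"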
00:38:05.427 ' 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:05.427 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:05.428 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:13.576 06:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:13.576 06:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:13.576 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:13.576 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:13.576 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:13.576 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:13.576 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:13.577 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:13.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:13.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:38:13.577 00:38:13.577 --- 10.0.0.2 ping statistics --- 00:38:13.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.577 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:13.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:38:13.577 00:38:13.577 --- 10.0.0.1 ping statistics --- 00:38:13.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.577 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3110582 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3110582 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3110582 ']' 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:13.577 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:13.577 [2024-11-20 06:48:33.178035] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:13.577 [2024-11-20 06:48:33.179151] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:38:13.577 [2024-11-20 06:48:33.179206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.577 [2024-11-20 06:48:33.278677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:13.577 [2024-11-20 06:48:33.332439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.577 [2024-11-20 06:48:33.332496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.577 [2024-11-20 06:48:33.332504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.577 [2024-11-20 06:48:33.332512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.577 [2024-11-20 06:48:33.332519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.577 [2024-11-20 06:48:33.334643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.577 [2024-11-20 06:48:33.334806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:13.577 [2024-11-20 06:48:33.334970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:13.577 [2024-11-20 06:48:33.334971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.577 [2024-11-20 06:48:33.413556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:13.577 [2024-11-20 06:48:33.414919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:13.577 [2024-11-20 06:48:33.414924] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:13.577 [2024-11-20 06:48:33.415453] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:13.577 [2024-11-20 06:48:33.415510] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
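Everything up to this point is the nvmftestinit/nvmfappstart phase: the harness detects the two e810 ports (0x8086 - 0x159b), moves the target-side port cvl_0_0 into a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, verifies reachability in both directions, and launches nvmf_tgt inside the namespace in interrupt mode. Condensed into plain shell, the sequence traced above is roughly the following sketch (interface names, addresses, and flags are copied from the trace; $SPDK stands in for the workspace checkout path, and the harness additionally tags the iptables rule with an SPDK_NVMF comment):

    # Move the target-side port into its own namespace so initiator and
    # target traffic actually crosses the link between the two e810 ports.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # The initiator keeps cvl_0_1 in the default namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, then start the target inside the
    # namespace with interrupt mode enabled and four cores (-m 0xF).
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF

The reactor and thread notices above confirm the effect of --interrupt-mode: all four reactors come up and each nvmf_tgt poll-group thread is switched to interrupt mode rather than busy polling.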
00:38:13.838 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:13.838 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:38:13.838 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:13.838 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:13.838 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:13.838 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.838 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:14.098 [2024-11-20 06:48:34.215986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.098 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:14.358 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:38:14.358 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:14.619 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:38:14.619 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:14.619 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:38:14.619 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:14.880 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:38:14.880 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:38:15.141 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:15.401 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:15.401 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:15.401 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:15.401 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:15.661 06:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:15.661 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:15.922 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:16.183 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:16.183 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:16.183 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:16.183 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:16.443 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:16.704 [2024-11-20 06:48:36.803974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:16.704 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:16.964 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:16.964 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:17.536 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:17.536 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:38:17.536 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:38:17.536 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:38:17.536 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:38:17.536 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:38:19.447 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:38:19.447 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:38:19.447 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:38:19.707 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:38:19.707 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:38:19.707 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:38:19.707 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:19.707 [global] 00:38:19.707 thread=1 00:38:19.707 invalidate=1 00:38:19.707 rw=write 00:38:19.707 time_based=1 00:38:19.707 runtime=1 00:38:19.707 ioengine=libaio 00:38:19.707 direct=1 00:38:19.707 bs=4096 00:38:19.707 iodepth=1 00:38:19.707 norandommap=0 00:38:19.707 numjobs=1 00:38:19.707 00:38:19.707 verify_dump=1 00:38:19.707 verify_backlog=512 00:38:19.707 verify_state_save=0 00:38:19.707 do_verify=1 00:38:19.707 verify=crc32c-intel 00:38:19.707 [job0] 00:38:19.707 filename=/dev/nvme0n1 00:38:19.707 [job1] 00:38:19.707 filename=/dev/nvme0n2 00:38:19.707 [job2] 00:38:19.707 filename=/dev/nvme0n3 00:38:19.707 [job3] 00:38:19.707 filename=/dev/nvme0n4 00:38:19.707 Could not set queue depth (nvme0n1) 00:38:19.707 Could not set queue depth (nvme0n2) 00:38:19.707 Could not set queue depth (nvme0n3) 00:38:19.707 Could not set queue depth (nvme0n4) 00:38:19.968 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:19.968 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:19.968 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:19.968 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:19.968 fio-3.35 00:38:19.968 Starting 4 threads 00:38:21.350 00:38:21.350 job0: (groupid=0, jobs=1): err= 0: pid=3112032: Wed Nov 20 06:48:41 2024 00:38:21.350 read: IOPS=14, BW=59.9KiB/s (61.4kB/s)(60.0KiB/1001msec) 00:38:21.350 slat (nsec): min=25869, max=26999, avg=26198.40, stdev=324.19 00:38:21.350 clat (usec): min=41011, max=42876, avg=41967.39, stdev=359.66 00:38:21.350 lat (usec): min=41037, max=42902, avg=41993.59, stdev=359.65 00:38:21.350 clat percentiles (usec): 00:38:21.351 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:38:21.351 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:38:21.351 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:38:21.351 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:21.351 | 99.99th=[42730] 00:38:21.351 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:38:21.351 slat (usec): min=9, max=41547, avg=192.50, stdev=2545.42 00:38:21.351 clat (usec): min=127, max=954, avg=522.47, stdev=148.15 00:38:21.351 lat (usec): min=137, max=42062, avg=714.97, stdev=2552.47 00:38:21.351 clat percentiles (usec): 00:38:21.351 | 1.00th=[ 247], 5.00th=[ 289], 10.00th=[ 330], 20.00th=[ 379], 00:38:21.351 | 30.00th=[ 433], 40.00th=[ 482], 50.00th=[ 529], 60.00th=[ 562], 00:38:21.351 | 70.00th=[ 603], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 766], 00:38:21.351 | 
99.00th=[ 857], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 955], 00:38:21.351 | 99.99th=[ 955] 00:38:21.351 bw ( KiB/s): min= 4096, max= 4096, per=44.54%, avg=4096.00, stdev= 0.00, samples=1 00:38:21.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:21.351 lat (usec) : 250=1.52%, 500=40.23%, 750=49.15%, 1000=6.26% 00:38:21.351 lat (msec) : 50=2.85% 00:38:21.351 cpu : usr=1.20%, sys=1.20%, ctx=531, majf=0, minf=1 00:38:21.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:21.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:21.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:21.351 job1: (groupid=0, jobs=1): err= 0: pid=3112050: Wed Nov 20 06:48:41 2024 00:38:21.351 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:21.351 slat (nsec): min=8729, max=61489, avg=26633.87, stdev=3692.45 00:38:21.351 clat (usec): min=715, max=1530, avg=1134.12, stdev=115.42 00:38:21.351 lat (usec): min=741, max=1573, avg=1160.75, stdev=115.39 00:38:21.351 clat percentiles (usec): 00:38:21.351 | 1.00th=[ 783], 5.00th=[ 922], 10.00th=[ 988], 20.00th=[ 1045], 00:38:21.351 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1172], 00:38:21.351 | 70.00th=[ 1205], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[ 1303], 00:38:21.351 | 99.00th=[ 1369], 99.50th=[ 1401], 99.90th=[ 1532], 99.95th=[ 1532], 00:38:21.351 | 99.99th=[ 1532] 00:38:21.351 write: IOPS=629, BW=2517KiB/s (2578kB/s)(2520KiB/1001msec); 0 zone resets 00:38:21.351 slat (nsec): min=10087, max=69173, avg=32573.51, stdev=8243.89 00:38:21.351 clat (usec): min=129, max=983, avg=594.12, stdev=159.52 00:38:21.351 lat (usec): min=140, max=1017, avg=626.70, stdev=161.53 00:38:21.351 clat percentiles (usec): 00:38:21.351 | 1.00th=[ 219], 5.00th=[ 293], 10.00th=[ 375], 20.00th=[ 461], 00:38:21.351 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 644], 00:38:21.351 | 70.00th=[ 685], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 840], 00:38:21.351 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:38:21.351 | 99.99th=[ 988] 00:38:21.351 bw ( KiB/s): min= 4096, max= 4096, per=44.54%, avg=4096.00, stdev= 0.00, samples=1 00:38:21.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:21.351 lat (usec) : 250=0.96%, 500=12.78%, 750=32.49%, 1000=14.45% 00:38:21.351 lat (msec) : 2=39.32% 00:38:21.351 cpu : usr=2.10%, sys=3.20%, ctx=1143, majf=0, minf=1 00:38:21.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:21.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 issued rwts: total=512,630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:21.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:21.351 job2: (groupid=0, jobs=1): err= 0: pid=3112069: Wed Nov 20 06:48:41 2024 00:38:21.351 read: IOPS=16, BW=67.6KiB/s (69.2kB/s)(68.0KiB/1006msec) 00:38:21.351 slat (nsec): min=9642, max=27833, avg=26294.00, stdev=4296.27 00:38:21.351 clat (usec): min=1117, max=42174, avg=39336.50, stdev=9857.46 00:38:21.351 lat (usec): min=1145, max=42201, avg=39362.79, stdev=9857.23 00:38:21.351 clat percentiles (usec): 00:38:21.351 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41157], 
00:38:21.351 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:38:21.351 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:21.351 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:21.351 | 99.99th=[42206] 00:38:21.351 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:38:21.351 slat (nsec): min=9689, max=55435, avg=31888.78, stdev=9973.66 00:38:21.351 clat (usec): min=218, max=961, avg=615.96, stdev=123.56 00:38:21.351 lat (usec): min=254, max=997, avg=647.84, stdev=128.03 00:38:21.351 clat percentiles (usec): 00:38:21.351 | 1.00th=[ 322], 5.00th=[ 404], 10.00th=[ 461], 20.00th=[ 506], 00:38:21.351 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:38:21.351 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:38:21.351 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 963], 99.95th=[ 963], 00:38:21.351 | 99.99th=[ 963] 00:38:21.351 bw ( KiB/s): min= 4096, max= 4096, per=44.54%, avg=4096.00, stdev= 0.00, samples=1 00:38:21.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:21.351 lat (usec) : 250=0.19%, 500=18.53%, 750=65.22%, 1000=12.85% 00:38:21.351 lat (msec) : 2=0.19%, 50=3.02% 00:38:21.351 cpu : usr=0.80%, sys=2.29%, ctx=530, majf=0, minf=1 00:38:21.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:21.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:21.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:21.351 job3: (groupid=0, jobs=1): err= 0: pid=3112076: Wed Nov 20 06:48:41 2024 00:38:21.351 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:21.351 slat (nsec): min=7546, max=47939, avg=28263.71, stdev=2645.41 00:38:21.351 clat (usec): min=647, max=1243, avg=993.59, stdev=73.34 00:38:21.351 lat (usec): min=676, max=1271, avg=1021.85, stdev=73.40 00:38:21.351 clat percentiles (usec): 00:38:21.351 | 1.00th=[ 775], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 947], 00:38:21.351 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:38:21.351 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1106], 00:38:21.351 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1237], 00:38:21.351 | 99.99th=[ 1237] 00:38:21.351 write: IOPS=658, BW=2633KiB/s (2697kB/s)(2636KiB/1001msec); 0 zone resets 00:38:21.351 slat (usec): min=6, max=39135, avg=93.37, stdev=1524.38 00:38:21.351 clat (usec): min=303, max=943, avg=614.44, stdev=107.68 00:38:21.351 lat (usec): min=339, max=39874, avg=707.81, stdev=1533.72 00:38:21.351 clat percentiles (usec): 00:38:21.351 | 1.00th=[ 363], 5.00th=[ 433], 10.00th=[ 474], 20.00th=[ 529], 00:38:21.351 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 652], 00:38:21.351 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 783], 00:38:21.351 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 947], 99.95th=[ 947], 00:38:21.351 | 99.99th=[ 947] 00:38:21.351 bw ( KiB/s): min= 4096, max= 4096, per=44.54%, avg=4096.00, stdev= 0.00, samples=1 00:38:21.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:21.351 lat (usec) : 500=8.45%, 750=42.27%, 1000=28.44% 00:38:21.351 lat (msec) : 2=20.84% 00:38:21.351 cpu : usr=2.80%, sys=4.20%, ctx=1176, majf=0, minf=1 00:38:21.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:38:21.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.351 issued rwts: total=512,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:21.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:21.351 00:38:21.351 Run status group 0 (all jobs): 00:38:21.351 READ: bw=4199KiB/s (4300kB/s), 59.9KiB/s-2046KiB/s (61.4kB/s-2095kB/s), io=4224KiB (4325kB), run=1001-1006msec 00:38:21.351 WRITE: bw=9197KiB/s (9418kB/s), 2036KiB/s-2633KiB/s (2085kB/s-2697kB/s), io=9252KiB (9474kB), run=1001-1006msec 00:38:21.351 00:38:21.351 Disk stats (read/write): 00:38:21.351 nvme0n1: ios=60/512, merge=0/0, ticks=1052/259, in_queue=1311, util=86.67% 00:38:21.351 nvme0n2: ios=487/512, merge=0/0, ticks=994/283, in_queue=1277, util=87.74% 00:38:21.351 nvme0n3: ios=34/512, merge=0/0, ticks=1340/247, in_queue=1587, util=91.75% 00:38:21.351 nvme0n4: ios=505/512, merge=0/0, ticks=1037/245, in_queue=1282, util=97.00% 00:38:21.351 06:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:38:21.351 [global] 00:38:21.351 thread=1 00:38:21.351 invalidate=1 00:38:21.351 rw=randwrite 00:38:21.351 time_based=1 00:38:21.351 runtime=1 00:38:21.351 ioengine=libaio 00:38:21.351 direct=1 00:38:21.351 bs=4096 00:38:21.351 iodepth=1 00:38:21.351 norandommap=0 00:38:21.351 numjobs=1 00:38:21.351 00:38:21.351 verify_dump=1 00:38:21.351 verify_backlog=512 00:38:21.351 verify_state_save=0 00:38:21.351 do_verify=1 00:38:21.351 verify=crc32c-intel 00:38:21.351 [job0] 00:38:21.351 filename=/dev/nvme0n1 00:38:21.351 [job1] 00:38:21.351 filename=/dev/nvme0n2 00:38:21.351 [job2] 00:38:21.351 filename=/dev/nvme0n3 00:38:21.351 [job3] 00:38:21.351 filename=/dev/nvme0n4 00:38:21.351 Could not set queue depth (nvme0n1) 00:38:21.351 Could not set queue depth (nvme0n2) 00:38:21.351 Could not set queue depth (nvme0n3) 00:38:21.351 Could not set queue depth (nvme0n4) 00:38:21.612 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:21.612 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:21.612 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:21.612 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:21.612 fio-3.35 00:38:21.612 Starting 4 threads 00:38:22.995 00:38:22.995 job0: (groupid=0, jobs=1): err= 0: pid=3112489: Wed Nov 20 06:48:43 2024 00:38:22.995 read: IOPS=16, BW=66.3KiB/s (67.9kB/s)(68.0KiB/1025msec) 00:38:22.995 slat (nsec): min=26019, max=26668, avg=26288.82, stdev=178.00 00:38:22.995 clat (usec): min=1109, max=42049, avg=39327.43, stdev=9856.55 00:38:22.995 lat (usec): min=1136, max=42075, avg=39353.72, stdev=9856.58 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41157], 00:38:22.995 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:38:22.995 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:22.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:22.995 | 99.99th=[42206] 00:38:22.995 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone 
resets 00:38:22.995 slat (nsec): min=8969, max=50806, avg=29805.18, stdev=8416.00 00:38:22.995 clat (usec): min=270, max=1084, avg=657.89, stdev=136.37 00:38:22.995 lat (usec): min=290, max=1116, avg=687.70, stdev=138.58 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[ 306], 5.00th=[ 424], 10.00th=[ 478], 20.00th=[ 545], 00:38:22.995 | 30.00th=[ 586], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 693], 00:38:22.995 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 889], 00:38:22.995 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1090], 99.95th=[ 1090], 00:38:22.995 | 99.99th=[ 1090] 00:38:22.995 bw ( KiB/s): min= 4096, max= 4096, per=48.39%, avg=4096.00, stdev= 0.00, samples=1 00:38:22.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:22.995 lat (usec) : 500=12.67%, 750=63.52%, 1000=19.85% 00:38:22.995 lat (msec) : 2=0.95%, 50=3.02% 00:38:22.995 cpu : usr=0.88%, sys=2.15%, ctx=529, majf=0, minf=1 00:38:22.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:22.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:22.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:22.995 job1: (groupid=0, jobs=1): err= 0: pid=3112505: Wed Nov 20 06:48:43 2024 00:38:22.995 read: IOPS=18, BW=74.4KiB/s (76.1kB/s)(76.0KiB/1022msec) 00:38:22.995 slat (nsec): min=27219, max=28169, avg=27619.84, stdev=259.62 00:38:22.995 clat (usec): min=40802, max=41144, avg=40972.21, stdev=90.99 00:38:22.995 lat (usec): min=40830, max=41172, avg=40999.83, stdev=90.93 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:38:22.995 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:22.995 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:22.995 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:22.995 | 99.99th=[41157] 00:38:22.995 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:38:22.995 slat (nsec): min=9903, max=53193, avg=29700.37, stdev=9643.34 00:38:22.995 clat (usec): min=109, max=737, avg=436.18, stdev=92.09 00:38:22.995 lat (usec): min=120, max=771, avg=465.88, stdev=95.34 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[ 227], 5.00th=[ 281], 10.00th=[ 318], 20.00th=[ 359], 00:38:22.995 | 30.00th=[ 383], 40.00th=[ 424], 50.00th=[ 453], 60.00th=[ 474], 00:38:22.995 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 570], 00:38:22.995 | 99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 742], 99.95th=[ 742], 00:38:22.995 | 99.99th=[ 742] 00:38:22.995 bw ( KiB/s): min= 4096, max= 4096, per=48.39%, avg=4096.00, stdev= 0.00, samples=1 00:38:22.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:22.995 lat (usec) : 250=2.82%, 500=71.94%, 750=21.66% 00:38:22.995 lat (msec) : 50=3.58% 00:38:22.995 cpu : usr=0.49%, sys=1.76%, ctx=532, majf=0, minf=1 00:38:22.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:22.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:22.995 latency : target=0, window=0, percentile=100.00%, depth=1 
00:38:22.995 job2: (groupid=0, jobs=1): err= 0: pid=3112523: Wed Nov 20 06:48:43 2024 00:38:22.995 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:22.995 slat (nsec): min=25720, max=59992, avg=26934.75, stdev=3252.41 00:38:22.995 clat (usec): min=785, max=1383, avg=1130.29, stdev=107.30 00:38:22.995 lat (usec): min=812, max=1409, avg=1157.23, stdev=107.16 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[ 832], 5.00th=[ 930], 10.00th=[ 996], 20.00th=[ 1057], 00:38:22.995 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1156], 00:38:22.995 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[ 1303], 00:38:22.995 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1385], 99.95th=[ 1385], 00:38:22.995 | 99.99th=[ 1385] 00:38:22.995 write: IOPS=632, BW=2529KiB/s (2590kB/s)(2532KiB/1001msec); 0 zone resets 00:38:22.995 slat (nsec): min=9432, max=52474, avg=31653.02, stdev=7668.31 00:38:22.995 clat (usec): min=179, max=996, avg=596.95, stdev=148.88 00:38:22.995 lat (usec): min=190, max=1029, avg=628.60, stdev=151.05 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[ 260], 5.00th=[ 351], 10.00th=[ 392], 20.00th=[ 469], 00:38:22.995 | 30.00th=[ 515], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 644], 00:38:22.995 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 857], 00:38:22.995 | 99.00th=[ 955], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 996], 00:38:22.995 | 99.99th=[ 996] 00:38:22.995 bw ( KiB/s): min= 4096, max= 4096, per=48.39%, avg=4096.00, stdev= 0.00, samples=1 00:38:22.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:22.995 lat (usec) : 250=0.52%, 500=14.67%, 750=32.75%, 1000=12.40% 00:38:22.995 lat (msec) : 2=39.65% 00:38:22.995 cpu : usr=1.60%, sys=3.70%, ctx=1146, majf=0, minf=1 00:38:22.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:22.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 issued rwts: total=512,633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:22.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:22.995 job3: (groupid=0, jobs=1): err= 0: pid=3112530: Wed Nov 20 06:48:43 2024 00:38:22.995 read: IOPS=330, BW=1323KiB/s (1354kB/s)(1324KiB/1001msec) 00:38:22.995 slat (nsec): min=7048, max=44974, avg=24876.01, stdev=5746.13 00:38:22.995 clat (usec): min=694, max=42091, avg=2016.98, stdev=6268.78 00:38:22.995 lat (usec): min=725, max=42119, avg=2041.85, stdev=6269.32 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[ 742], 5.00th=[ 848], 10.00th=[ 898], 20.00th=[ 947], 00:38:22.995 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057], 00:38:22.995 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1254], 00:38:22.995 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:22.995 | 99.99th=[42206] 00:38:22.995 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:38:22.995 slat (nsec): min=9713, max=59264, avg=35433.32, stdev=6199.97 00:38:22.995 clat (usec): min=194, max=1650, avg=585.72, stdev=164.67 00:38:22.995 lat (usec): min=206, max=1684, avg=621.15, stdev=165.83 00:38:22.995 clat percentiles (usec): 00:38:22.995 | 1.00th=[ 269], 5.00th=[ 326], 10.00th=[ 375], 20.00th=[ 441], 00:38:22.995 | 30.00th=[ 486], 40.00th=[ 537], 50.00th=[ 586], 60.00th=[ 635], 00:38:22.995 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 840], 
00:38:22.995 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1647], 99.95th=[ 1647], 00:38:22.995 | 99.99th=[ 1647] 00:38:22.995 bw ( KiB/s): min= 4096, max= 4096, per=48.39%, avg=4096.00, stdev= 0.00, samples=1 00:38:22.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:22.995 lat (usec) : 250=0.36%, 500=19.22%, 750=32.03%, 1000=21.95% 00:38:22.995 lat (msec) : 2=25.50%, 50=0.95% 00:38:22.995 cpu : usr=0.60%, sys=4.20%, ctx=845, majf=0, minf=1 00:38:22.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:22.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:22.995 issued rwts: total=331,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:22.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:22.995 00:38:22.995 Run status group 0 (all jobs): 00:38:22.995 READ: bw=3430KiB/s (3513kB/s), 66.3KiB/s-2046KiB/s (67.9kB/s-2095kB/s), io=3516KiB (3600kB), run=1001-1025msec 00:38:22.995 WRITE: bw=8464KiB/s (8668kB/s), 1998KiB/s-2529KiB/s (2046kB/s-2590kB/s), io=8676KiB (8884kB), run=1001-1025msec 00:38:22.995 00:38:22.995 Disk stats (read/write): 00:38:22.995 nvme0n1: ios=62/512, merge=0/0, ticks=523/267, in_queue=790, util=87.98% 00:38:22.995 nvme0n2: ios=70/512, merge=0/0, ticks=762/217, in_queue=979, util=95.72% 00:38:22.995 nvme0n3: ios=494/512, merge=0/0, ticks=671/293, in_queue=964, util=99.16% 00:38:22.995 nvme0n4: ios=219/512, merge=0/0, ticks=1406/232, in_queue=1638, util=99.36% 00:38:22.995 06:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:38:22.995 [global] 00:38:22.995 thread=1 00:38:22.995 invalidate=1 00:38:22.995 rw=write 00:38:22.995 time_based=1 00:38:22.995 runtime=1 00:38:22.995 ioengine=libaio 00:38:22.995 direct=1 00:38:22.995 bs=4096 00:38:22.995 iodepth=128 00:38:22.995 norandommap=0 00:38:22.995 numjobs=1 00:38:22.995 00:38:22.995 verify_dump=1 00:38:22.995 verify_backlog=512 00:38:22.995 verify_state_save=0 00:38:22.995 do_verify=1 00:38:22.995 verify=crc32c-intel 00:38:22.995 [job0] 00:38:22.995 filename=/dev/nvme0n1 00:38:22.995 [job1] 00:38:22.995 filename=/dev/nvme0n2 00:38:22.995 [job2] 00:38:22.996 filename=/dev/nvme0n3 00:38:22.996 [job3] 00:38:22.996 filename=/dev/nvme0n4 00:38:22.996 Could not set queue depth (nvme0n1) 00:38:22.996 Could not set queue depth (nvme0n2) 00:38:22.996 Could not set queue depth (nvme0n3) 00:38:22.996 Could not set queue depth (nvme0n4) 00:38:23.255 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:23.255 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:23.255 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:23.255 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:23.255 fio-3.35 00:38:23.255 Starting 4 threads 00:38:24.635 00:38:24.635 job0: (groupid=0, jobs=1): err= 0: pid=3112933: Wed Nov 20 06:48:44 2024 00:38:24.635 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:38:24.635 slat (nsec): min=951, max=10001k, avg=81123.00, stdev=584663.86 00:38:24.635 clat (usec): min=1090, max=44250, avg=9816.45, stdev=5821.57 00:38:24.635 lat (usec): min=1094, 
max=44256, avg=9897.57, stdev=5881.89 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 2073], 5.00th=[ 2573], 10.00th=[ 3687], 20.00th=[ 7373], 00:38:24.635 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 9110], 00:38:24.635 | 70.00th=[10421], 80.00th=[11469], 90.00th=[15926], 95.00th=[20841], 00:38:24.635 | 99.00th=[34866], 99.50th=[39060], 99.90th=[43254], 99.95th=[44303], 00:38:24.635 | 99.99th=[44303] 00:38:24.635 write: IOPS=5244, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1006msec); 0 zone resets 00:38:24.635 slat (nsec): min=1638, max=14892k, avg=95663.23, stdev=561453.88 00:38:24.635 clat (usec): min=852, max=49353, avg=14595.40, stdev=9622.03 00:38:24.635 lat (usec): min=861, max=49356, avg=14691.06, stdev=9670.06 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 1450], 5.00th=[ 3228], 10.00th=[ 4555], 20.00th=[ 6521], 00:38:24.635 | 30.00th=[ 7504], 40.00th=[ 9634], 50.00th=[13304], 60.00th=[14615], 00:38:24.635 | 70.00th=[16319], 80.00th=[25035], 90.00th=[29754], 95.00th=[33162], 00:38:24.635 | 99.00th=[41157], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:38:24.635 | 99.99th=[49546] 00:38:24.635 bw ( KiB/s): min=17392, max=23792, per=21.16%, avg=20592.00, stdev=4525.48, samples=2 00:38:24.635 iops : min= 4348, max= 5948, avg=5148.00, stdev=1131.37, samples=2 00:38:24.635 lat (usec) : 1000=0.09% 00:38:24.635 lat (msec) : 2=1.48%, 4=7.80%, 10=45.29%, 20=29.62%, 50=15.73% 00:38:24.635 cpu : usr=3.78%, sys=5.37%, ctx=526, majf=0, minf=1 00:38:24.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:38:24.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:24.635 issued rwts: total=5120,5276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:24.635 job1: (groupid=0, jobs=1): err= 0: pid=3112950: Wed Nov 20 06:48:44 2024 00:38:24.635 read: IOPS=7339, BW=28.7MiB/s (30.1MB/s)(29.9MiB/1044msec) 00:38:24.635 slat (nsec): min=942, max=17413k, avg=61937.74, stdev=458776.22 00:38:24.635 clat (usec): min=3060, max=52088, avg=8863.68, stdev=6072.33 00:38:24.635 lat (usec): min=3072, max=52094, avg=8925.62, stdev=6088.36 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 5080], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6980], 00:38:24.635 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:38:24.635 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[10159], 95.00th=[17171], 00:38:24.635 | 99.00th=[44827], 99.50th=[45351], 99.90th=[52167], 99.95th=[52167], 00:38:24.635 | 99.99th=[52167] 00:38:24.635 write: IOPS=7356, BW=28.7MiB/s (30.1MB/s)(30.0MiB/1044msec); 0 zone resets 00:38:24.635 slat (nsec): min=1604, max=46591k, avg=64259.09, stdev=684408.79 00:38:24.635 clat (usec): min=1511, max=70075, avg=7988.28, stdev=6159.09 00:38:24.635 lat (usec): min=2451, max=70086, avg=8052.54, stdev=6200.71 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 4293], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6652], 00:38:24.635 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7177], 00:38:24.635 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 9241], 95.00th=[10552], 00:38:24.635 | 99.00th=[23725], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:38:24.635 | 99.99th=[69731] 00:38:24.635 bw ( KiB/s): min=28672, max=32768, per=31.57%, avg=30720.00, stdev=2896.31, samples=2 00:38:24.635 iops : min= 7168, max= 8192, 
avg=7680.00, stdev=724.08, samples=2 00:38:24.635 lat (msec) : 2=0.01%, 4=0.22%, 10=91.36%, 20=5.34%, 50=2.45% 00:38:24.635 lat (msec) : 100=0.61% 00:38:24.635 cpu : usr=4.60%, sys=5.56%, ctx=747, majf=0, minf=1 00:38:24.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:38:24.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:24.635 issued rwts: total=7662,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:24.635 job2: (groupid=0, jobs=1): err= 0: pid=3112969: Wed Nov 20 06:48:44 2024 00:38:24.635 read: IOPS=5026, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1004msec) 00:38:24.635 slat (nsec): min=987, max=11385k, avg=93975.72, stdev=690969.40 00:38:24.635 clat (usec): min=1124, max=47354, avg=12296.03, stdev=5902.06 00:38:24.635 lat (usec): min=4702, max=55120, avg=12390.01, stdev=5959.32 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 5080], 5.00th=[ 6652], 10.00th=[ 7177], 20.00th=[ 8291], 00:38:24.635 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11338], 00:38:24.635 | 70.00th=[13435], 80.00th=[15795], 90.00th=[21890], 95.00th=[24511], 00:38:24.635 | 99.00th=[31327], 99.50th=[38011], 99.90th=[46924], 99.95th=[46924], 00:38:24.635 | 99.99th=[47449] 00:38:24.635 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:38:24.635 slat (nsec): min=1653, max=6847.4k, avg=97550.34, stdev=522794.21 00:38:24.635 clat (usec): min=3582, max=60462, avg=12634.72, stdev=8567.71 00:38:24.635 lat (usec): min=3591, max=61042, avg=12732.27, stdev=8622.96 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 6063], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7898], 00:38:24.635 | 30.00th=[ 8356], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:38:24.635 | 70.00th=[13698], 80.00th=[14746], 90.00th=[21103], 95.00th=[33162], 00:38:24.635 | 99.00th=[51643], 99.50th=[55837], 99.90th=[60556], 99.95th=[60556], 00:38:24.635 | 99.99th=[60556] 00:38:24.635 bw ( KiB/s): min=16384, max=24576, per=21.04%, avg=20480.00, stdev=5792.62, samples=2 00:38:24.635 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:38:24.635 lat (msec) : 2=0.01%, 4=0.11%, 10=53.46%, 20=35.01%, 50=10.72% 00:38:24.635 lat (msec) : 100=0.70% 00:38:24.635 cpu : usr=3.89%, sys=5.08%, ctx=443, majf=0, minf=1 00:38:24.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:38:24.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:24.635 issued rwts: total=5047,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:24.635 job3: (groupid=0, jobs=1): err= 0: pid=3112976: Wed Nov 20 06:48:44 2024 00:38:24.635 read: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec) 00:38:24.635 slat (nsec): min=1005, max=10185k, avg=68901.61, stdev=534446.06 00:38:24.635 clat (usec): min=2860, max=27961, avg=8971.97, stdev=2778.89 00:38:24.635 lat (usec): min=4204, max=27971, avg=9040.87, stdev=2823.39 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 5145], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7242], 00:38:24.635 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8356], 00:38:24.635 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[12518], 95.00th=[13698], 
00:38:24.635 | 99.00th=[18744], 99.50th=[23987], 99.90th=[26870], 99.95th=[27919], 00:38:24.635 | 99.99th=[27919] 00:38:24.635 write: IOPS=7266, BW=28.4MiB/s (29.8MB/s)(28.6MiB/1008msec); 0 zone resets 00:38:24.635 slat (nsec): min=1773, max=7449.3k, avg=64182.62, stdev=444117.55 00:38:24.635 clat (usec): min=1231, max=52040, avg=8639.85, stdev=5802.40 00:38:24.635 lat (usec): min=1240, max=52043, avg=8704.03, stdev=5833.26 00:38:24.635 clat percentiles (usec): 00:38:24.635 | 1.00th=[ 3326], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5080], 00:38:24.635 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:38:24.635 | 70.00th=[ 8356], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[14615], 00:38:24.635 | 99.00th=[42730], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:38:24.635 | 99.99th=[52167] 00:38:24.635 bw ( KiB/s): min=25728, max=31848, per=29.58%, avg=28788.00, stdev=4327.49, samples=2 00:38:24.635 iops : min= 6432, max= 7962, avg=7197.00, stdev=1081.87, samples=2 00:38:24.635 lat (msec) : 2=0.13%, 4=1.12%, 10=77.45%, 20=19.20%, 50=1.79% 00:38:24.635 lat (msec) : 100=0.32% 00:38:24.635 cpu : usr=5.06%, sys=6.95%, ctx=522, majf=0, minf=2 00:38:24.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:38:24.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:24.635 issued rwts: total=7168,7325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:24.635 00:38:24.635 Run status group 0 (all jobs): 00:38:24.635 READ: bw=93.5MiB/s (98.1MB/s), 19.6MiB/s-28.7MiB/s (20.6MB/s-30.1MB/s), io=97.6MiB (102MB), run=1004-1044msec 00:38:24.635 WRITE: bw=95.0MiB/s (99.7MB/s), 19.9MiB/s-28.7MiB/s (20.9MB/s-30.1MB/s), io=99.2MiB (104MB), run=1004-1044msec 00:38:24.635 00:38:24.635 Disk stats (read/write): 00:38:24.635 nvme0n1: ios=4657/4622, merge=0/0, ticks=40555/57542, in_queue=98097, util=86.77% 00:38:24.635 nvme0n2: ios=6189/6509, merge=0/0, ticks=26081/24536, in_queue=50617, util=90.52% 00:38:24.635 nvme0n3: ios=4104/4103, merge=0/0, ticks=23950/25836, in_queue=49786, util=95.04% 00:38:24.636 nvme0n4: ios=5708/6144, merge=0/0, ticks=49347/51587, in_queue=100934, util=97.12% 00:38:24.636 06:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:38:24.636 [global] 00:38:24.636 thread=1 00:38:24.636 invalidate=1 00:38:24.636 rw=randwrite 00:38:24.636 time_based=1 00:38:24.636 runtime=1 00:38:24.636 ioengine=libaio 00:38:24.636 direct=1 00:38:24.636 bs=4096 00:38:24.636 iodepth=128 00:38:24.636 norandommap=0 00:38:24.636 numjobs=1 00:38:24.636 00:38:24.636 verify_dump=1 00:38:24.636 verify_backlog=512 00:38:24.636 verify_state_save=0 00:38:24.636 do_verify=1 00:38:24.636 verify=crc32c-intel 00:38:24.636 [job0] 00:38:24.636 filename=/dev/nvme0n1 00:38:24.636 [job1] 00:38:24.636 filename=/dev/nvme0n2 00:38:24.636 [job2] 00:38:24.636 filename=/dev/nvme0n3 00:38:24.636 [job3] 00:38:24.636 filename=/dev/nvme0n4 00:38:24.636 Could not set queue depth (nvme0n1) 00:38:24.636 Could not set queue depth (nvme0n2) 00:38:24.636 Could not set queue depth (nvme0n3) 00:38:24.636 Could not set queue depth (nvme0n4) 00:38:25.203 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:25.203 
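For reference, the [global]/[jobN] stanzas dumped just above are an ordinary fio ini job file as emitted by scripts/fio-wrapper. A minimal standalone sketch of the same randwrite workload, assuming fio is installed and /dev/nvme0n1 exists (the path is illustrative, not taken from this rig):

cat > /tmp/randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/randwrite.fio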
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:25.203 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:25.203 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:25.203 fio-3.35 00:38:25.203 Starting 4 threads 00:38:26.145 00:38:26.145 job0: (groupid=0, jobs=1): err= 0: pid=3113442: Wed Nov 20 06:48:46 2024 00:38:26.145 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:38:26.145 slat (nsec): min=954, max=17883k, avg=122388.56, stdev=959386.45 00:38:26.145 clat (usec): min=2730, max=42332, avg=15821.26, stdev=6627.97 00:38:26.145 lat (usec): min=2807, max=42358, avg=15943.64, stdev=6684.82 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 4113], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[ 9241], 00:38:26.145 | 30.00th=[11469], 40.00th=[12911], 50.00th=[14746], 60.00th=[16581], 00:38:26.145 | 70.00th=[20317], 80.00th=[21890], 90.00th=[25035], 95.00th=[27395], 00:38:26.145 | 99.00th=[31589], 99.50th=[31589], 99.90th=[35390], 99.95th=[37487], 00:38:26.145 | 99.99th=[42206] 00:38:26.145 write: IOPS=4115, BW=16.1MiB/s (16.9MB/s)(16.3MiB/1013msec); 0 zone resets 00:38:26.145 slat (nsec): min=1576, max=17292k, avg=108117.95, stdev=879841.38 00:38:26.145 clat (usec): min=1508, max=60017, avg=15175.58, stdev=7954.10 00:38:26.145 lat (usec): min=1520, max=60025, avg=15283.69, stdev=8002.41 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 3654], 5.00th=[ 4228], 10.00th=[ 4621], 20.00th=[ 8356], 00:38:26.145 | 30.00th=[10814], 40.00th=[13173], 50.00th=[14353], 60.00th=[15270], 00:38:26.145 | 70.00th=[16057], 80.00th=[21890], 90.00th=[23725], 95.00th=[32113], 00:38:26.145 | 99.00th=[39584], 99.50th=[39584], 99.90th=[50594], 99.95th=[50594], 00:38:26.145 | 99.99th=[60031] 00:38:26.145 bw ( KiB/s): min=13328, max=19440, per=15.76%, avg=16384.00, stdev=4321.84, samples=2 00:38:26.145 iops : min= 3332, max= 4860, avg=4096.00, stdev=1080.46, samples=2 00:38:26.145 lat (msec) : 2=0.17%, 4=1.67%, 10=23.65%, 20=46.13%, 50=28.24% 00:38:26.145 lat (msec) : 100=0.13% 00:38:26.145 cpu : usr=3.06%, sys=4.55%, ctx=273, majf=0, minf=1 00:38:26.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:38:26.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:26.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:26.145 issued rwts: total=4096,4169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:26.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:26.145 job1: (groupid=0, jobs=1): err= 0: pid=3113447: Wed Nov 20 06:48:46 2024 00:38:26.145 read: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec) 00:38:26.145 slat (nsec): min=941, max=8325.8k, avg=57711.25, stdev=440005.20 00:38:26.145 clat (usec): min=2052, max=24728, avg=7796.47, stdev=2333.82 00:38:26.145 lat (usec): min=2058, max=24734, avg=7854.18, stdev=2353.53 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 3425], 5.00th=[ 4883], 10.00th=[ 5669], 20.00th=[ 6194], 00:38:26.145 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 7832], 00:38:26.145 | 70.00th=[ 8356], 80.00th=[ 9110], 90.00th=[10814], 95.00th=[12125], 00:38:26.145 | 99.00th=[15533], 99.50th=[17433], 99.90th=[24511], 99.95th=[24511], 00:38:26.145 | 99.99th=[24773] 00:38:26.145 write: IOPS=8923, BW=34.9MiB/s (36.6MB/s)(35.0MiB/1005msec); 0 zone resets 
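A quick sanity check on the numbers fio prints (a sketch, not part of the log): reported bandwidth is IOPS times block size, so job1's 8660 read IOPS at bs=4096 works out to 8660 * 4096 = 35,471,360 B/s, about 35.5 MB/s in decimal units, or 33.8 MiB/s after dividing by 1048576, matching the BW column above.

echo "$((8660 * 4096)) bytes/s"   # 35471360, ~35.5 MB/s (33.8 MiB/s)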
00:38:26.145 slat (nsec): min=1603, max=7504.0k, avg=51012.15, stdev=389102.35 00:38:26.145 clat (usec): min=1192, max=16763, avg=6618.03, stdev=1842.20 00:38:26.145 lat (usec): min=1220, max=16770, avg=6669.04, stdev=1863.37 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 2802], 5.00th=[ 4080], 10.00th=[ 4424], 20.00th=[ 4948], 00:38:26.145 | 30.00th=[ 5800], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6980], 00:38:26.145 | 70.00th=[ 7111], 80.00th=[ 7504], 90.00th=[ 9110], 95.00th=[ 9896], 00:38:26.145 | 99.00th=[11994], 99.50th=[15401], 99.90th=[16712], 99.95th=[16712], 00:38:26.145 | 99.99th=[16712] 00:38:26.145 bw ( KiB/s): min=34416, max=36304, per=34.02%, avg=35360.00, stdev=1335.02, samples=2 00:38:26.145 iops : min= 8604, max= 9076, avg=8840.00, stdev=333.75, samples=2 00:38:26.145 lat (msec) : 2=0.12%, 4=3.19%, 10=87.84%, 20=8.72%, 50=0.12% 00:38:26.145 cpu : usr=5.28%, sys=8.37%, ctx=593, majf=0, minf=3 00:38:26.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:38:26.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:26.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:26.145 issued rwts: total=8704,8968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:26.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:26.145 job2: (groupid=0, jobs=1): err= 0: pid=3113460: Wed Nov 20 06:48:46 2024 00:38:26.145 read: IOPS=6065, BW=23.7MiB/s (24.8MB/s)(24.0MiB/1013msec) 00:38:26.145 slat (nsec): min=977, max=10814k, avg=76637.34, stdev=597584.49 00:38:26.145 clat (usec): min=1494, max=34856, avg=10769.43, stdev=4436.66 00:38:26.145 lat (usec): min=1507, max=34863, avg=10846.06, stdev=4455.76 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 3163], 5.00th=[ 3785], 10.00th=[ 6063], 20.00th=[ 7308], 00:38:26.145 | 30.00th=[ 8455], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11338], 00:38:26.145 | 70.00th=[12387], 80.00th=[13173], 90.00th=[15401], 95.00th=[18482], 00:38:26.145 | 99.00th=[27132], 99.50th=[29230], 99.90th=[34341], 99.95th=[34341], 00:38:26.145 | 99.99th=[34866] 00:38:26.145 write: IOPS=6440, BW=25.2MiB/s (26.4MB/s)(25.5MiB/1013msec); 0 zone resets 00:38:26.145 slat (nsec): min=1605, max=12368k, avg=71222.30, stdev=597352.97 00:38:26.145 clat (usec): min=829, max=53765, avg=9519.61, stdev=5537.62 00:38:26.145 lat (usec): min=843, max=53774, avg=9590.83, stdev=5567.90 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 1598], 5.00th=[ 3654], 10.00th=[ 4817], 20.00th=[ 6521], 00:38:26.145 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 9634], 00:38:26.145 | 70.00th=[10814], 80.00th=[11994], 90.00th=[14091], 95.00th=[15401], 00:38:26.145 | 99.00th=[39584], 99.50th=[40109], 99.90th=[53740], 99.95th=[53740], 00:38:26.145 | 99.99th=[53740] 00:38:26.145 bw ( KiB/s): min=22688, max=28488, per=24.62%, avg=25588.00, stdev=4101.22, samples=2 00:38:26.145 iops : min= 5672, max= 7122, avg=6397.00, stdev=1025.30, samples=2 00:38:26.145 lat (usec) : 1000=0.01% 00:38:26.145 lat (msec) : 2=1.67%, 4=4.02%, 10=48.17%, 20=42.42%, 50=3.60% 00:38:26.145 lat (msec) : 100=0.12% 00:38:26.145 cpu : usr=5.93%, sys=5.63%, ctx=304, majf=0, minf=1 00:38:26.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:38:26.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:26.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:26.145 issued rwts: total=6144,6524,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:38:26.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:26.145 job3: (groupid=0, jobs=1): err= 0: pid=3113467: Wed Nov 20 06:48:46 2024 00:38:26.145 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:38:26.145 slat (nsec): min=1004, max=13518k, avg=76572.61, stdev=617758.57 00:38:26.145 clat (usec): min=3518, max=36641, avg=10023.90, stdev=4169.85 00:38:26.145 lat (usec): min=3524, max=36646, avg=10100.47, stdev=4209.59 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 4228], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7242], 00:38:26.145 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9503], 00:38:26.145 | 70.00th=[10421], 80.00th=[12518], 90.00th=[15139], 95.00th=[16909], 00:38:26.145 | 99.00th=[29492], 99.50th=[32900], 99.90th=[35914], 99.95th=[36439], 00:38:26.145 | 99.99th=[36439] 00:38:26.145 write: IOPS=6599, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec); 0 zone resets 00:38:26.145 slat (nsec): min=1690, max=14789k, avg=69261.16, stdev=549203.13 00:38:26.145 clat (usec): min=1209, max=36623, avg=9163.67, stdev=3839.94 00:38:26.145 lat (usec): min=1219, max=36626, avg=9232.93, stdev=3865.72 00:38:26.145 clat percentiles (usec): 00:38:26.145 | 1.00th=[ 3392], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5866], 00:38:26.145 | 30.00th=[ 7242], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8979], 00:38:26.145 | 70.00th=[10290], 80.00th=[11731], 90.00th=[15270], 95.00th=[15795], 00:38:26.145 | 99.00th=[23462], 99.50th=[23462], 99.90th=[26346], 99.95th=[26346], 00:38:26.145 | 99.99th=[36439] 00:38:26.145 bw ( KiB/s): min=20488, max=32760, per=25.62%, avg=26624.00, stdev=8677.61, samples=2 00:38:26.145 iops : min= 5122, max= 8190, avg=6656.00, stdev=2169.40, samples=2 00:38:26.145 lat (msec) : 2=0.06%, 4=1.20%, 10=65.78%, 20=30.15%, 50=2.80% 00:38:26.145 cpu : usr=3.87%, sys=7.54%, ctx=400, majf=0, minf=1 00:38:26.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:38:26.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:26.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:26.145 issued rwts: total=6656,6659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:26.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:26.145 00:38:26.145 Run status group 0 (all jobs): 00:38:26.145 READ: bw=98.7MiB/s (104MB/s), 15.8MiB/s-33.8MiB/s (16.6MB/s-35.5MB/s), io=100MiB (105MB), run=1005-1013msec 00:38:26.145 WRITE: bw=101MiB/s (106MB/s), 16.1MiB/s-34.9MiB/s (16.9MB/s-36.6MB/s), io=103MiB (108MB), run=1005-1013msec 00:38:26.145 00:38:26.145 Disk stats (read/write): 00:38:26.145 nvme0n1: ios=3378/3584, merge=0/0, ticks=36984/40432, in_queue=77416, util=84.67% 00:38:26.145 nvme0n2: ios=7186/7175, merge=0/0, ticks=54622/45759, in_queue=100381, util=88.79% 00:38:26.146 nvme0n3: ios=5169/5201, merge=0/0, ticks=47099/40938, in_queue=88037, util=95.36% 00:38:26.146 nvme0n4: ios=5691/6144, merge=0/0, ticks=48995/50764, in_queue=99759, util=94.98% 00:38:26.146 06:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:38:26.406 06:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3113756 00:38:26.406 06:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:38:26.406 06:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:38:26.406 [global] 00:38:26.406 thread=1 00:38:26.406 invalidate=1 00:38:26.406 rw=read 00:38:26.406 time_based=1 00:38:26.406 runtime=10 00:38:26.406 ioengine=libaio 00:38:26.406 direct=1 00:38:26.406 bs=4096 00:38:26.406 iodepth=1 00:38:26.406 norandommap=1 00:38:26.406 numjobs=1 00:38:26.406 00:38:26.406 [job0] 00:38:26.406 filename=/dev/nvme0n1 00:38:26.406 [job1] 00:38:26.406 filename=/dev/nvme0n2 00:38:26.406 [job2] 00:38:26.406 filename=/dev/nvme0n3 00:38:26.406 [job3] 00:38:26.406 filename=/dev/nvme0n4 00:38:26.406 Could not set queue depth (nvme0n1) 00:38:26.406 Could not set queue depth (nvme0n2) 00:38:26.406 Could not set queue depth (nvme0n3) 00:38:26.406 Could not set queue depth (nvme0n4) 00:38:26.665 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.665 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.665 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.665 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:26.665 fio-3.35 00:38:26.665 Starting 4 threads 00:38:29.207 06:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:38:29.466 06:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:38:29.466 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:38:29.466 fio: pid=3113961, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:29.726 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11198464, buflen=4096 00:38:29.726 fio: pid=3113957, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:29.726 06:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:29.726 06:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:38:29.987 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:29.987 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:38:29.987 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=303104, buflen=4096 00:38:29.987 fio: pid=3113946, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:29.987 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:29.987 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:38:29.987 fio: io_u error on file /dev/nvme0n2: Operation not supported: 
read offset=331776, buflen=4096 00:38:29.987 fio: pid=3113950, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:29.987 00:38:29.987 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3113946: Wed Nov 20 06:48:50 2024 00:38:29.987 read: IOPS=25, BW=99.0KiB/s (101kB/s)(296KiB/2989msec) 00:38:29.988 slat (usec): min=25, max=267, avg=29.70, stdev=28.16 00:38:29.988 clat (usec): min=862, max=42115, avg=40056.17, stdev=6573.25 00:38:29.988 lat (usec): min=891, max=42142, avg=40085.87, stdev=6570.69 00:38:29.988 clat percentiles (usec): 00:38:29.988 | 1.00th=[ 865], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:38:29.988 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:29.988 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:38:29.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:29.988 | 99.99th=[42206] 00:38:29.988 bw ( KiB/s): min= 96, max= 112, per=2.67%, avg=99.20, stdev= 7.16, samples=5 00:38:29.988 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:38:29.988 lat (usec) : 1000=1.33% 00:38:29.988 lat (msec) : 2=1.33%, 50=96.00% 00:38:29.988 cpu : usr=0.13%, sys=0.00%, ctx=77, majf=0, minf=2 00:38:29.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:29.988 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3113950: Wed Nov 20 06:48:50 2024 00:38:29.988 read: IOPS=25, BW=102KiB/s (104kB/s)(324KiB/3179msec) 00:38:29.988 slat (usec): min=21, max=25697, avg=344.48, stdev=2834.52 00:38:29.988 clat (usec): min=424, max=41983, avg=38618.87, stdev=9785.91 00:38:29.988 lat (usec): min=451, max=67049, avg=38967.28, stdev=10279.47 00:38:29.988 clat percentiles (usec): 00:38:29.988 | 1.00th=[ 424], 5.00th=[ 914], 10.00th=[40633], 20.00th=[41157], 00:38:29.988 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:29.988 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:38:29.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:29.988 | 99.99th=[42206] 00:38:29.988 bw ( KiB/s): min= 96, max= 112, per=2.75%, avg=102.50, stdev= 5.99, samples=6 00:38:29.988 iops : min= 24, max= 28, avg=25.50, stdev= 1.52, samples=6 00:38:29.988 lat (usec) : 500=1.22%, 750=1.22%, 1000=3.66% 00:38:29.988 lat (msec) : 50=92.68% 00:38:29.988 cpu : usr=0.13%, sys=0.00%, ctx=86, majf=0, minf=2 00:38:29.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:29.988 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3113957: Wed Nov 20 06:48:50 2024 00:38:29.988 read: IOPS=987, BW=3947KiB/s (4041kB/s)(10.7MiB/2771msec) 00:38:29.988 slat (usec): min=6, max=14060, avg=36.80, stdev=378.61 00:38:29.988 
clat (usec): min=391, max=2151, avg=962.86, stdev=70.77 00:38:29.988 lat (usec): min=419, max=15058, avg=999.66, stdev=386.43 00:38:29.988 clat percentiles (usec): 00:38:29.988 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 930], 00:38:29.988 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:38:29.988 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057], 00:38:29.988 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1205], 99.95th=[ 1303], 00:38:29.988 | 99.99th=[ 2147] 00:38:29.988 bw ( KiB/s): min= 4008, max= 4040, per=100.00%, avg=4017.60, stdev=13.15, samples=5 00:38:29.988 iops : min= 1002, max= 1010, avg=1004.40, stdev= 3.29, samples=5 00:38:29.988 lat (usec) : 500=0.04%, 750=1.06%, 1000=74.08% 00:38:29.988 lat (msec) : 2=24.75%, 4=0.04% 00:38:29.988 cpu : usr=1.95%, sys=3.75%, ctx=2737, majf=0, minf=1 00:38:29.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 issued rwts: total=2735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:29.988 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3113961: Wed Nov 20 06:48:50 2024 00:38:29.988 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(252KiB/2614msec) 00:38:29.988 slat (nsec): min=26898, max=42692, avg=27435.61, stdev=1956.54 00:38:29.988 clat (usec): min=876, max=42041, avg=41122.64, stdev=5166.30 00:38:29.988 lat (usec): min=919, max=42068, avg=41150.05, stdev=5164.35 00:38:29.988 clat percentiles (usec): 00:38:29.988 | 1.00th=[ 881], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:29.988 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:38:29.988 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:29.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:29.988 | 99.99th=[42206] 00:38:29.988 bw ( KiB/s): min= 88, max= 104, per=2.58%, avg=96.00, stdev= 5.66, samples=5 00:38:29.988 iops : min= 22, max= 26, avg=24.00, stdev= 1.41, samples=5 00:38:29.988 lat (usec) : 1000=1.56% 00:38:29.988 lat (msec) : 50=96.88% 00:38:29.988 cpu : usr=0.15%, sys=0.00%, ctx=64, majf=0, minf=2 00:38:29.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.988 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:29.988 00:38:29.988 Run status group 0 (all jobs): 00:38:29.988 READ: bw=3714KiB/s (3804kB/s), 96.4KiB/s-3947KiB/s (98.7kB/s-4041kB/s), io=11.5MiB (12.1MB), run=2614-3179msec 00:38:29.988 00:38:29.988 Disk stats (read/write): 00:38:29.988 nvme0n1: ios=70/0, merge=0/0, ticks=2803/0, in_queue=2803, util=94.72% 00:38:29.988 nvme0n2: ios=79/0, merge=0/0, ticks=3048/0, in_queue=3048, util=94.86% 00:38:29.988 nvme0n3: ios=2595/0, merge=0/0, ticks=2430/0, in_queue=2430, util=96.03% 00:38:29.988 nvme0n4: ios=62/0, merge=0/0, ticks=2551/0, in_queue=2551, util=96.46% 00:38:30.248 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
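Condensing what the fio.sh traces around here perform (bdev names are the ones visible in the log; rpc.py abbreviates the full /var/jenkins/.../spdk/scripts/rpc.py path): while the 10-second read-phase fio is still running, the raid bdevs and then each malloc bdev are torn down over RPC, so in-flight reads fail with the err=95 "Operation not supported" io_u errors summarized above. That is exactly the hotplug behavior this test exercises. A sketch of the sequence:

scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done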
00:38:30.248 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:38:30.509 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:30.509 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:38:30.509 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:30.509 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:38:30.769 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:30.769 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3113756 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:31.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:38:31.030 nvmf hotplug test: fio failed as expected 00:38:31.030 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:31.290 rmmod nvme_tcp 00:38:31.290 rmmod nvme_fabrics 00:38:31.290 rmmod nvme_keyring 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3110582 ']' 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3110582 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3110582 ']' 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3110582 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:31.290 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3110582 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3110582' 00:38:31.550 killing process with pid 3110582 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3110582 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3110582 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:31.550 06:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.550 06:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:34.092 00:38:34.092 real 0m28.344s 00:38:34.092 user 2m20.612s 00:38:34.092 sys 0m11.864s 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:34.092 ************************************ 00:38:34.092 END TEST nvmf_fio_target 00:38:34.092 ************************************ 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:34.092 ************************************ 00:38:34.092 START TEST nvmf_bdevio 00:38:34.092 ************************************ 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:34.092 * Looking for test storage... 
00:38:34.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:38:34.092 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:34.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.092 --rc genhtml_branch_coverage=1 00:38:34.092 --rc genhtml_function_coverage=1 00:38:34.092 --rc genhtml_legend=1 00:38:34.092 --rc geninfo_all_blocks=1 00:38:34.092 --rc geninfo_unexecuted_blocks=1 00:38:34.092 00:38:34.092 ' 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:34.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.092 --rc genhtml_branch_coverage=1 00:38:34.092 --rc genhtml_function_coverage=1 00:38:34.092 --rc genhtml_legend=1 00:38:34.092 --rc geninfo_all_blocks=1 00:38:34.092 --rc geninfo_unexecuted_blocks=1 00:38:34.092 00:38:34.092 ' 00:38:34.092 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.093 --rc genhtml_branch_coverage=1 00:38:34.093 --rc genhtml_function_coverage=1 00:38:34.093 --rc genhtml_legend=1 00:38:34.093 --rc geninfo_all_blocks=1 00:38:34.093 --rc geninfo_unexecuted_blocks=1 00:38:34.093 00:38:34.093 ' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.093 --rc genhtml_branch_coverage=1 00:38:34.093 --rc genhtml_function_coverage=1 00:38:34.093 --rc genhtml_legend=1 00:38:34.093 --rc geninfo_all_blocks=1 00:38:34.093 --rc geninfo_unexecuted_blocks=1 00:38:34.093 00:38:34.093 ' 00:38:34.093 06:48:54 
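The scripts/common.sh trace just above is a field-by-field version comparison (`lt 1.15 2`, deciding whether the installed lcov predates 2.x by splitting on ".-:" and walking the fields). A simplified bash sketch of the same idea, not the exact SPDK helper:

lt() {
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 < 2, legacy --rc lcov_* options selected"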
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:34.093 06:48:54 
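A side note on the very long PATH dumps above: paths/export.sh prepends the protoc, Go and golangci tool directories each time it runs, and it evidently gets sourced once per nested test script, which is why the same triple repeats; this is harmless because PATH lookup stops at the first match. The effective per-invocation operation is just (illustrative):

PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:$PATH
export PATH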
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:38:34.093 06:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:42.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:42.371 06:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:42.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:42.371 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:42.371 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:42.372 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:42.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:42.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:38:42.372 00:38:42.372 --- 10.0.0.2 ping statistics --- 00:38:42.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.372 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:42.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:42.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:38:42.372 00:38:42.372 --- 10.0.0.1 ping statistics --- 00:38:42.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.372 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:42.372 06:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3119003 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3119003 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3119003 ']' 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:42.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:42.372 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.372 [2024-11-20 06:49:01.642616] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:42.372 [2024-11-20 06:49:01.643728] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:38:42.372 [2024-11-20 06:49:01.643780] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:42.372 [2024-11-20 06:49:01.744734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:42.372 [2024-11-20 06:49:01.797197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:42.372 [2024-11-20 06:49:01.797252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:42.372 [2024-11-20 06:49:01.797261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:42.372 [2024-11-20 06:49:01.797268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:42.372 [2024-11-20 06:49:01.797274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:42.372 [2024-11-20 06:49:01.799640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:42.372 [2024-11-20 06:49:01.799801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:42.372 [2024-11-20 06:49:01.799963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:42.372 [2024-11-20 06:49:01.799964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:42.372 [2024-11-20 06:49:01.878656] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
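For reference, the nvmfappstart call traced above boils down to launching the target inside the test namespace in interrupt mode and blocking until its RPC socket answers. This is a minimal sketch of that sequence; the retry loop stands in for the waitforlisten helper (which caps retries at 100 per the trace) and the relative paths are illustrative:

# Start nvmf_tgt in the target netns with interrupt mode on (mask 0x78 = cores 3-6,
# matching the four reactors started in the log).
ip netns exec cvl_0_0_ns_spdk \
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# Poll the RPC socket until the app is up and listening.
until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done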
00:38:42.372 [2024-11-20 06:49:01.879656] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:42.372 [2024-11-20 06:49:01.879862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:42.372 [2024-11-20 06:49:01.880465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:42.372 [2024-11-20 06:49:01.880509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:42.372 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.373 [2024-11-20 06:49:02.508960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.373 Malloc0 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.373 06:49:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:42.373 [2024-11-20 06:49:02.597323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:42.373 { 00:38:42.373 "params": { 00:38:42.373 "name": "Nvme$subsystem", 00:38:42.373 "trtype": "$TEST_TRANSPORT", 00:38:42.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:42.373 "adrfam": "ipv4", 00:38:42.373 "trsvcid": "$NVMF_PORT", 00:38:42.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:42.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:42.373 "hdgst": ${hdgst:-false}, 00:38:42.373 "ddgst": ${ddgst:-false} 00:38:42.373 }, 00:38:42.373 "method": "bdev_nvme_attach_controller" 00:38:42.373 } 00:38:42.373 EOF 00:38:42.373 )") 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:38:42.373 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:42.373 "params": { 00:38:42.373 "name": "Nvme1", 00:38:42.373 "trtype": "tcp", 00:38:42.373 "traddr": "10.0.0.2", 00:38:42.373 "adrfam": "ipv4", 00:38:42.373 "trsvcid": "4420", 00:38:42.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:42.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:42.373 "hdgst": false, 00:38:42.373 "ddgst": false 00:38:42.373 }, 00:38:42.373 "method": "bdev_nvme_attach_controller" 00:38:42.373 }' 00:38:42.634 [2024-11-20 06:49:02.657027] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
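The JSON printed just above is what gen_nvmf_target_json hands to bdevio over /dev/fd/62. Run by hand it would look roughly like the sketch below; the outer "subsystems"/"bdev" wrapper is an assumption about what the helper adds around the printed config entry:

# Attach the target's namespace as bdev Nvme1n1, then run the bdevio suite on it.
./spdk/test/bdev/bdevio/bdevio --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)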
00:38:42.634 [2024-11-20 06:49:02.657100] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119336 ] 00:38:42.634 [2024-11-20 06:49:02.750523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:42.634 [2024-11-20 06:49:02.808119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.634 [2024-11-20 06:49:02.808282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:42.634 [2024-11-20 06:49:02.808449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.894 I/O targets: 00:38:42.894 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:38:42.894 00:38:42.894 00:38:42.894 CUnit - A unit testing framework for C - Version 2.1-3 00:38:42.894 http://cunit.sourceforge.net/ 00:38:42.894 00:38:42.894 00:38:42.894 Suite: bdevio tests on: Nvme1n1 00:38:42.894 Test: blockdev write read block ...passed 00:38:43.155 Test: blockdev write zeroes read block ...passed 00:38:43.155 Test: blockdev write zeroes read no split ...passed 00:38:43.155 Test: blockdev write zeroes read split ...passed 00:38:43.155 Test: blockdev write zeroes read split partial ...passed 00:38:43.155 Test: blockdev reset ...[2024-11-20 06:49:03.268116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:38:43.155 [2024-11-20 06:49:03.268232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8970 (9): Bad file descriptor 00:38:43.155 [2024-11-20 06:49:03.275259] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:38:43.155 passed 00:38:43.155 Test: blockdev write read 8 blocks ...passed 00:38:43.155 Test: blockdev write read size > 128k ...passed 00:38:43.155 Test: blockdev write read invalid size ...passed 00:38:43.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:43.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:43.155 Test: blockdev write read max offset ...passed 00:38:43.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:43.415 Test: blockdev writev readv 8 blocks ...passed 00:38:43.415 Test: blockdev writev readv 30 x 1block ...passed 00:38:43.415 Test: blockdev writev readv block ...passed 00:38:43.415 Test: blockdev writev readv size > 128k ...passed 00:38:43.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:43.415 Test: blockdev comparev and writev ...[2024-11-20 06:49:03.583516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.415 [2024-11-20 06:49:03.583567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:43.415 [2024-11-20 06:49:03.583584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.415 [2024-11-20 06:49:03.583594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:43.415 [2024-11-20 06:49:03.584181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.415 [2024-11-20 06:49:03.584196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:43.415 [2024-11-20 06:49:03.584210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.415 [2024-11-20 06:49:03.584220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:43.415 [2024-11-20 06:49:03.584821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.415 [2024-11-20 06:49:03.584838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:43.415 [2024-11-20 06:49:03.584853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.415 [2024-11-20 06:49:03.584862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:43.415 [2024-11-20 06:49:03.585442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.416 [2024-11-20 06:49:03.585456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:43.416 [2024-11-20 06:49:03.585470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:43.416 [2024-11-20 06:49:03.585478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:43.416 passed 00:38:43.416 Test: blockdev nvme passthru rw ...passed 00:38:43.416 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:49:03.671034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:43.416 [2024-11-20 06:49:03.671055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:43.416 [2024-11-20 06:49:03.671472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:43.416 [2024-11-20 06:49:03.671493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:43.416 [2024-11-20 06:49:03.671888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:43.416 [2024-11-20 06:49:03.671900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:43.416 [2024-11-20 06:49:03.672312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:43.416 [2024-11-20 06:49:03.672325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:43.416 passed 00:38:43.416 Test: blockdev nvme admin passthru ...passed 00:38:43.676 Test: blockdev copy ...passed 00:38:43.676 00:38:43.676 Run Summary: Type Total Ran Passed Failed Inactive 00:38:43.676 suites 1 1 n/a 0 0 00:38:43.676 tests 23 23 23 0 0 00:38:43.676 asserts 152 152 152 0 n/a 00:38:43.676 00:38:43.676 Elapsed time = 1.283 seconds 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:43.676 rmmod nvme_tcp 00:38:43.676 rmmod nvme_fabrics 00:38:43.676 rmmod nvme_keyring 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
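The module teardown traced here is deliberately tolerant: the rmmod output shows nvme_tcp, nvme_fabrics and nvme_keyring going away, and the surrounding set +e / for i in {1..20} guard means an already-removed module is not an error. In plain shell the pattern is roughly:

# Unload host-side NVMe/TCP modules; removing nvme-tcp drags its dependents
# out with it, so the follow-up removals are allowed to fail.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e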
00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3119003 ']' 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3119003 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3119003 ']' 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3119003 00:38:43.676 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:38:43.937 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:43.937 06:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3119003 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3119003' 00:38:43.937 killing process with pid 3119003 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3119003 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3119003 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.937 06:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.505 06:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:46.505 00:38:46.505 real 0m12.417s 00:38:46.505 user 
0m10.644s 00:38:46.505 sys 0m6.466s 00:38:46.505 06:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:46.505 06:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:46.505 ************************************ 00:38:46.505 END TEST nvmf_bdevio 00:38:46.505 ************************************ 00:38:46.505 06:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:46.505 00:38:46.505 real 5m0.833s 00:38:46.505 user 10m14.858s 00:38:46.505 sys 2m3.217s 00:38:46.505 06:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:46.505 06:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:46.505 ************************************ 00:38:46.505 END TEST nvmf_target_core_interrupt_mode 00:38:46.505 ************************************ 00:38:46.505 06:49:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:46.505 06:49:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:46.505 06:49:06 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:46.505 06:49:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:46.505 ************************************ 00:38:46.505 START TEST nvmf_interrupt 00:38:46.505 ************************************ 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:46.505 * Looking for test storage... 
00:38:46.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.505 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:46.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.506 --rc genhtml_branch_coverage=1 00:38:46.506 --rc genhtml_function_coverage=1 00:38:46.506 --rc genhtml_legend=1 00:38:46.506 --rc geninfo_all_blocks=1 00:38:46.506 --rc geninfo_unexecuted_blocks=1 00:38:46.506 00:38:46.506 ' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:46.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.506 --rc genhtml_branch_coverage=1 00:38:46.506 --rc genhtml_function_coverage=1 00:38:46.506 --rc genhtml_legend=1 00:38:46.506 --rc geninfo_all_blocks=1 00:38:46.506 --rc geninfo_unexecuted_blocks=1 00:38:46.506 00:38:46.506 ' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:46.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.506 --rc genhtml_branch_coverage=1 00:38:46.506 --rc genhtml_function_coverage=1 00:38:46.506 --rc genhtml_legend=1 00:38:46.506 --rc geninfo_all_blocks=1 00:38:46.506 --rc geninfo_unexecuted_blocks=1 00:38:46.506 00:38:46.506 ' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:46.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.506 --rc genhtml_branch_coverage=1 00:38:46.506 --rc genhtml_function_coverage=1 00:38:46.506 --rc genhtml_legend=1 00:38:46.506 --rc geninfo_all_blocks=1 00:38:46.506 --rc geninfo_unexecuted_blocks=1 00:38:46.506 00:38:46.506 ' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:38:46.506 06:49:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.650 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:54.650 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.651 06:49:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:54.651 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:54.651 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:54.651 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:54.651 06:49:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:54.651 06:49:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:54.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:38:54.651 00:38:54.651 --- 10.0.0.2 ping statistics --- 00:38:54.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.651 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:54.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:38:54.651 00:38:54.651 --- 10.0.0.1 ping statistics --- 00:38:54.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.651 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3123687 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3123687 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3123687 ']' 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:54.651 06:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:54.651 [2024-11-20 06:49:14.251328] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:54.651 [2024-11-20 06:49:14.252470] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:38:54.651 [2024-11-20 06:49:14.252522] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.651 [2024-11-20 06:49:14.350706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:54.651 [2024-11-20 06:49:14.402536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
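Worth noting while the second target comes up: the ipts helper used in both bring-ups tags its iptables rule with an SPDK_NVMF comment, and the iptr helper used at teardown removes exactly those tagged rules. Per the expanded commands in the trace, the pair amounts to:

# ipts: open TCP/4420 on the initiator-side interface, tagged for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# iptr: rewrite the ruleset minus anything tagged SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore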
00:38:54.651 [2024-11-20 06:49:14.402584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.651 [2024-11-20 06:49:14.402592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.651 [2024-11-20 06:49:14.402600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.651 [2024-11-20 06:49:14.402606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.651 [2024-11-20 06:49:14.404272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.651 [2024-11-20 06:49:14.404303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.651 [2024-11-20 06:49:14.481991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:54.651 [2024-11-20 06:49:14.482534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:54.651 [2024-11-20 06:49:14.482856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:38:54.912 5000+0 records in 00:38:54.912 5000+0 records out 00:38:54.912 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0195812 s, 523 MB/s 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.912 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:55.173 AIO0 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:55.173 [2024-11-20 06:49:15.197318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.173 06:49:15 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:55.173 [2024-11-20 06:49:15.242004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3123687 0 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3123687 0 idle 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123687 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.32 reactor_0' 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123687 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.32 reactor_0 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:55.173 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3123687 1 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3123687 1 idle 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:38:55.174 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123691 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123691 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3124057 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3123687 0 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3123687 0 busy 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:38:55.434 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123687 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0' 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123687 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:55.695 06:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:38:56.636 06:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:38:56.636 06:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:56.636 06:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:38:56.636 06:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:56.897 06:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123687 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0' 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123687 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3123687 1 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3123687 1 busy 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:38:56.897 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123691 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.30 reactor_1' 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123691 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.30 reactor_1 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:57.157 06:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3124057 00:39:07.150 Initializing NVMe Controllers 00:39:07.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:07.150 Controller IO queue size 256, less than required. 00:39:07.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:07.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:07.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:07.150 Initialization complete. Launching workers. 
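The poll loop traced above is the heart of the interrupt-mode check: once spdk_nvme_perf starts driving I/O, each reactor thread must cross the busy threshold, and the probe samples top once in batch mode and reads the %CPU column of the reactor_<idx> row. As a rough standalone sketch (the helper name is ours, not the actual interrupt/common.sh code; the PID 3123687 and the threshold of 30 are taken from this log):

    # Sketch of the probe repeated throughout this trace; an assumption-laden
    # reconstruction from the xtrace, not a verbatim copy of interrupt/common.sh.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        # One batch sample of the target's threads; field 9 in top's default
        # layout is %CPU for the reactor_<idx> row.
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g' | awk '{print $9}'
    }
    cpu_rate=$(reactor_cpu_rate 3123687 0)
    cpu_rate=${cpu_rate%.*}                          # integer part, as the trace shows (99.9 -> 99)
    (( cpu_rate >= 30 )) && echo busy || echo idle   # 30 is the BUSY_THRESHOLD set in this run

With the perf job loading both queues, this probe reports 99.9% on each reactor; once the job exits, the same check drops back to 0.0%, which is what the idle assertions after the latency summary below rely on.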
00:39:07.150 ========================================================
00:39:07.150 Latency(us)
00:39:07.150 Device Information : IOPS MiB/s Average min max
00:39:07.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19221.20 75.08 13323.05 3873.96 33342.00
00:39:07.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20223.70 79.00 12659.82 7401.52 30519.03
00:39:07.150 ========================================================
00:39:07.150 Total : 39444.90 154.08 12983.01 3873.96 33342.00
00:39:07.150
00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3123687 0 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3123687 0 idle 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:39:07.150 06:49:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123687 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0' 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123687 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3123687 1 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3123687 1 idle 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:07.150 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123691 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123691 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:07.151 06:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:07.151 06:49:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:07.151 06:49:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:39:07.151 06:49:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:39:07.151 06:49:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:39:07.151 06:49:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3123687 0 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3123687 0 idle 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123687 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.70 reactor_0' 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123687 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.70 reactor_0 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3123687 1 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3123687 1 idle 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3123687 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:09.062 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
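The idle probes on either side of this point make the complementary check: with a host connected over NVMe/TCP but no I/O in flight, interrupt mode should leave both reactors near 0% CPU. The host-side state they verify was set up by the connect-and-wait step traced just above, which in outline amounts to the following (a hedged sketch; the NQN, address, port, and serial are the values used in this run, and the waitforserial loop shape is an assumption, not the autotest helper verbatim):

    # Connect the kernel initiator to the SPDK target in the namespace.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # Poll until the namespace shows up as a block device carrying the
    # target's serial (SPDKISFASTANDAWESOME in this run).
    for i in $(seq 1 15); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
        sleep 2
    done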
00:39:09.063 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:09.063 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:09.063 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3123687 -w 256 00:39:09.063 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3123691 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3123691 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:09.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:39:09.324 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.584 rmmod nvme_tcp 00:39:09.584 rmmod nvme_fabrics 00:39:09.584 rmmod nvme_keyring 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3123687 ']' 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3123687 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3123687 ']' 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3123687 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3123687 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3123687' 00:39:09.584 killing process with pid 3123687 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3123687 00:39:09.584 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3123687 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:09.844 06:49:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.755 06:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:11.755 00:39:11.755 real 0m25.574s 00:39:11.755 user 0m40.263s 00:39:11.755 sys 0m10.059s 00:39:11.755 06:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:11.755 06:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:11.755 ************************************ 00:39:11.755 END TEST nvmf_interrupt 00:39:11.755 ************************************ 00:39:11.755 00:39:11.755 real 30m10.121s 00:39:11.755 user 61m32.730s 00:39:11.755 sys 10m17.514s 00:39:11.755 06:49:32 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:11.755 06:49:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:11.755 ************************************ 00:39:11.755 END TEST nvmf_tcp 00:39:11.755 ************************************ 00:39:12.015 06:49:32 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:39:12.016 06:49:32 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:12.016 06:49:32 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:12.016 06:49:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:12.016 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:39:12.016 ************************************ 00:39:12.016 START TEST spdkcli_nvmf_tcp 00:39:12.016 ************************************ 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:12.016 * Looking for test storage... 00:39:12.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.016 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:12.276 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:12.276 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.277 --rc genhtml_branch_coverage=1 00:39:12.277 --rc genhtml_function_coverage=1 00:39:12.277 --rc genhtml_legend=1 00:39:12.277 --rc geninfo_all_blocks=1 00:39:12.277 --rc geninfo_unexecuted_blocks=1 00:39:12.277 00:39:12.277 ' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.277 --rc genhtml_branch_coverage=1 00:39:12.277 --rc genhtml_function_coverage=1 00:39:12.277 --rc genhtml_legend=1 00:39:12.277 --rc geninfo_all_blocks=1 00:39:12.277 --rc geninfo_unexecuted_blocks=1 00:39:12.277 00:39:12.277 ' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.277 --rc genhtml_branch_coverage=1 00:39:12.277 --rc genhtml_function_coverage=1 00:39:12.277 --rc genhtml_legend=1 00:39:12.277 --rc geninfo_all_blocks=1 00:39:12.277 --rc geninfo_unexecuted_blocks=1 00:39:12.277 00:39:12.277 ' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:12.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.277 --rc genhtml_branch_coverage=1 00:39:12.277 --rc genhtml_function_coverage=1 00:39:12.277 --rc genhtml_legend=1 00:39:12.277 --rc geninfo_all_blocks=1 00:39:12.277 --rc geninfo_unexecuted_blocks=1 00:39:12.277 00:39:12.277 ' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:12.277 
06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:12.277 06:49:32 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:12.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3127245 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3127245 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3127245 ']' 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:12.277 06:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:12.277 [2024-11-20 06:49:32.405650] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
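The startup notices around this point come from run_nvmf_tgt launching a fresh target for the spdkcli test; waitforlisten then blocks until the app answers on its UNIX-domain RPC socket, and the DPDK EAL parameter dump continues below. The launch-and-wait pattern reduces to roughly the following (a sketch under assumptions, not the actual autotest_common.sh implementation; the binary path, core mask, and socket path are from this log, and rpc_get_methods merely stands in for "the RPC socket is up and answering"):

    # Start the target in the background with the same arguments as this run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    # Retry a cheap RPC until the app is listening on /var/tmp/spdk.sock.
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done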
00:39:12.277 [2024-11-20 06:49:32.405722] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127245 ] 00:39:12.277 [2024-11-20 06:49:32.497680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:12.277 [2024-11-20 06:49:32.552095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:12.277 [2024-11-20 06:49:32.552101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:13.219 06:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:13.219 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:13.219 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:13.219 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:13.219 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:13.219 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:13.219 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:13.219 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:13.219 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:13.219 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:13.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:13.219 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:13.219 ' 00:39:15.759 [2024-11-20 06:49:35.967227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.143 [2024-11-20 06:49:37.323403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:19.687 [2024-11-20 06:49:39.842436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:22.230 [2024-11-20 06:49:42.064811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:23.612 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:23.612 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:23.612 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:23.612 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:23.612 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:23.612 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:23.612 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:23.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:23.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:23.612 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:23.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:23.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:23.612 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:23.612 06:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:23.612 06:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:23.612 06:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:23.612 06:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:23.612 06:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:23.612 06:49:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:23.871 06:49:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:23.871 06:49:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:24.131 
06:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:24.131 06:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:24.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:24.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:24.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:24.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:24.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:24.131 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:24.131 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:24.131 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:24.131 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:24.131 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:24.131 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:24.131 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:24.131 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:24.131 ' 00:39:30.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:30.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:30.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:30.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:30.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:30.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:30.712 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:30.712 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:30.712 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:30.712 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:30.712 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:30.712 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:30.712 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:30.712 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:30.712 
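The clear-config pass above tears the tree down in reverse dependency order: namespaces and hosts first, then listeners, then the subsystems themselves, and the malloc bdevs last, so nothing is deleted while a subsystem still references it. The same order sketched as direct RPCs, with the same caveat on option spellings as the create sketch:

# sketch only: reverse-order teardown, names and ports taken from the run above
rpc=./scripts/rpc.py
$rpc nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1            # by nsid
$rpc nvmf_subsystem_remove_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
$rpc nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 \
    -t tcp -a 127.0.0.1 -s 4262
$rpc nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
$rpc bdev_malloc_delete Malloc6                                        # bdevs go last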
06:49:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3127245 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3127245 ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3127245 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3127245 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3127245' 00:39:30.712 killing process with pid 3127245 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3127245 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3127245 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3127245 ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3127245 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3127245 ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3127245 00:39:30.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3127245) - No such process 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3127245 is not found' 00:39:30.712 Process with pid 3127245 is not found 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:30.712 00:39:30.712 real 0m18.147s 00:39:30.712 user 0m40.255s 00:39:30.712 sys 0m0.914s 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:30.712 06:49:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:30.712 ************************************ 00:39:30.712 END TEST spdkcli_nvmf_tcp 00:39:30.712 ************************************ 00:39:30.712 06:49:50 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:30.712 06:49:50 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:30.712 06:49:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:30.712 06:49:50 -- common/autotest_common.sh@10 -- # set +x 00:39:30.712 ************************************ 00:39:30.712 START TEST nvmf_identify_passthru 00:39:30.712 ************************************ 00:39:30.712 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:30.712 * Looking for test 
storage... 00:39:30.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:30.712 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:30.712 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:39:30.712 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:30.712 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:30.712 06:49:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:39:30.712 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:30.712 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:30.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.712 --rc genhtml_branch_coverage=1 00:39:30.712 --rc genhtml_function_coverage=1 00:39:30.712 --rc genhtml_legend=1 00:39:30.712 --rc geninfo_all_blocks=1 00:39:30.712 --rc geninfo_unexecuted_blocks=1 00:39:30.713 00:39:30.713 ' 00:39:30.713 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:30.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.713 --rc genhtml_branch_coverage=1 00:39:30.713 --rc genhtml_function_coverage=1 00:39:30.713 --rc genhtml_legend=1 00:39:30.713 --rc geninfo_all_blocks=1 00:39:30.713 --rc geninfo_unexecuted_blocks=1 00:39:30.713 00:39:30.713 ' 00:39:30.713 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:30.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.713 --rc genhtml_branch_coverage=1 00:39:30.713 --rc genhtml_function_coverage=1 00:39:30.713 --rc genhtml_legend=1 00:39:30.713 --rc geninfo_all_blocks=1 00:39:30.713 --rc geninfo_unexecuted_blocks=1 00:39:30.713 00:39:30.713 ' 00:39:30.713 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:30.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.713 --rc genhtml_branch_coverage=1 00:39:30.713 --rc genhtml_function_coverage=1 00:39:30.713 --rc genhtml_legend=1 00:39:30.713 --rc geninfo_all_blocks=1 00:39:30.713 --rc geninfo_unexecuted_blocks=1 00:39:30.713 00:39:30.713 ' 00:39:30.713 06:49:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:30.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:30.713 06:49:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:30.713 06:49:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:30.713 06:49:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.713 06:49:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.713 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:30.713 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:30.713 06:49:50 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:39:30.713 06:49:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:39:38.855 06:49:57 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:38.855 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:38.855 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:38.855 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:38.855 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:38.855 06:49:57 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:38.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:38.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:39:38.855 00:39:38.855 --- 10.0.0.2 ping statistics --- 00:39:38.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.855 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:39:38.855 06:49:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:38.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:38.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:39:38.855 00:39:38.855 --- 10.0.0.1 ping statistics --- 00:39:38.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.855 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:39:38.855 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:38.855 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:39:38.855 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:38.855 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:38.856 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:38.856 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:38.856 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:38.856 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:38.856 06:49:58 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:39:38.856 06:49:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:38.856 06:49:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:39.127 06:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:39:39.127 06:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.127 06:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.127 06:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3134656 00:39:39.127 06:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:39.127 06:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:39.127 06:49:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3134656 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3134656 ']' 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:39.127 06:49:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.127 [2024-11-20 06:49:59.289518] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:39:39.127 [2024-11-20 06:49:59.289587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.127 [2024-11-20 06:49:59.389740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:39.388 [2024-11-20 06:49:59.443287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:39.388 [2024-11-20 06:49:59.443339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:39.388 [2024-11-20 06:49:59.443348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:39.388 [2024-11-20 06:49:59.443356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:39.388 [2024-11-20 06:49:59.443362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:39.388 [2024-11-20 06:49:59.445357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.388 [2024-11-20 06:49:59.445519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:39.388 [2024-11-20 06:49:59.445684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.388 [2024-11-20 06:49:59.445685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:39:39.962 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.962 INFO: Log level set to 20 00:39:39.962 INFO: Requests: 00:39:39.962 { 00:39:39.962 "jsonrpc": "2.0", 00:39:39.962 "method": "nvmf_set_config", 00:39:39.962 "id": 1, 00:39:39.962 "params": { 00:39:39.962 "admin_cmd_passthru": { 00:39:39.962 "identify_ctrlr": true 00:39:39.962 } 00:39:39.962 } 00:39:39.962 } 00:39:39.962 00:39:39.962 INFO: response: 00:39:39.962 { 00:39:39.962 "jsonrpc": "2.0", 00:39:39.962 "id": 1, 00:39:39.962 "result": true 00:39:39.962 } 00:39:39.962 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.962 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:39.962 INFO: Setting log level to 20 00:39:39.962 INFO: Setting log level to 20 00:39:39.962 INFO: Log level set to 20 00:39:39.962 INFO: Log level set to 20 00:39:39.962 INFO: Requests: 00:39:39.962 { 00:39:39.962 "jsonrpc": "2.0", 00:39:39.962 "method": "framework_start_init", 00:39:39.962 "id": 1 00:39:39.962 } 00:39:39.962 00:39:39.962 INFO: Requests: 00:39:39.962 { 00:39:39.962 "jsonrpc": "2.0", 00:39:39.962 "method": "framework_start_init", 00:39:39.962 "id": 1 00:39:39.962 } 00:39:39.962 00:39:39.962 [2024-11-20 06:50:00.217175] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:39.962 INFO: response: 00:39:39.962 { 00:39:39.962 "jsonrpc": "2.0", 00:39:39.962 "id": 1, 00:39:39.962 "result": true 00:39:39.962 } 00:39:39.962 00:39:39.962 INFO: response: 00:39:39.962 { 00:39:39.962 "jsonrpc": "2.0", 00:39:39.962 "id": 1, 00:39:39.962 "result": true 00:39:39.962 } 00:39:39.962 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.962 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:39.962 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.962 06:50:00 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:39.962 INFO: Setting log level to 40 00:39:39.962 INFO: Setting log level to 40 00:39:39.962 INFO: Setting log level to 40 00:39:39.962 [2024-11-20 06:50:00.230776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.224 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.224 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:40.224 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:40.224 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:40.224 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:39:40.224 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.224 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:40.486 Nvme0n1 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.486 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.486 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.486 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.486 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:40.486 [2024-11-20 06:50:00.633083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.487 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.487 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:40.487 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.487 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:40.487 [ 00:39:40.487 { 00:39:40.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:40.487 "subtype": "Discovery", 00:39:40.487 "listen_addresses": [], 00:39:40.487 "allow_any_host": true, 00:39:40.487 "hosts": [] 00:39:40.487 }, 00:39:40.487 { 00:39:40.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:40.487 "subtype": "NVMe", 00:39:40.487 "listen_addresses": [ 00:39:40.487 { 00:39:40.487 "trtype": "TCP", 00:39:40.487 "adrfam": "IPv4", 00:39:40.487 "traddr": "10.0.0.2", 00:39:40.487 "trsvcid": "4420" 00:39:40.487 } 00:39:40.487 ], 00:39:40.487 "allow_any_host": true, 00:39:40.487 "hosts": [], 00:39:40.487 "serial_number": 
"SPDK00000000000001", 00:39:40.487 "model_number": "SPDK bdev Controller", 00:39:40.487 "max_namespaces": 1, 00:39:40.487 "min_cntlid": 1, 00:39:40.487 "max_cntlid": 65519, 00:39:40.487 "namespaces": [ 00:39:40.487 { 00:39:40.487 "nsid": 1, 00:39:40.487 "bdev_name": "Nvme0n1", 00:39:40.487 "name": "Nvme0n1", 00:39:40.487 "nguid": "36344730526054870025384500000044", 00:39:40.487 "uuid": "36344730-5260-5487-0025-384500000044" 00:39:40.487 } 00:39:40.487 ] 00:39:40.487 } 00:39:40.487 ] 00:39:40.487 06:50:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.487 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:40.487 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:40.487 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:40.748 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:39:40.748 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:40.748 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:40.748 06:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:40.748 06:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:39:40.748 06:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:39:40.748 06:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:39:40.748 06:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:40.748 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.748 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:40.748 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.748 06:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:40.748 06:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:40.748 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:40.748 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:41.008 rmmod nvme_tcp 00:39:41.008 rmmod nvme_fabrics 00:39:41.008 rmmod nvme_keyring 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3134656 ']' 00:39:41.008 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3134656 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3134656 ']' 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3134656 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3134656 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3134656' 00:39:41.008 killing process with pid 3134656 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3134656 00:39:41.008 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3134656 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:41.269 06:50:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.269 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:41.269 06:50:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.814 06:50:03 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:43.814 00:39:43.814 real 0m13.211s 00:39:43.814 user 0m10.243s 00:39:43.814 sys 0m6.798s 00:39:43.814 06:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:43.814 06:50:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:43.814 ************************************ 00:39:43.814 END TEST nvmf_identify_passthru 00:39:43.814 ************************************ 00:39:43.814 06:50:03 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:43.814 06:50:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:43.814 06:50:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:43.814 06:50:03 -- common/autotest_common.sh@10 -- # set +x 00:39:43.814 ************************************ 00:39:43.814 START TEST nvmf_dif 00:39:43.814 ************************************ 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:43.814 * Looking for test storage... 
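The nvmftestfini cleanup traced just above (iptr, remove_spdk_ns, addr flush) is the inverse of the earlier nvmf_tcp_init: rules that were inserted with an SPDK_NVMF comment are filtered back out of iptables, the target namespace goes away, and the initiator-side address is flushed. A minimal stand-alone sketch; the ip netns delete line is an assumption, since _remove_spdk_ns itself is not traced here:

# sketch of the cleanup pattern, interface/namespace names taken from this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep all rules except the tagged test rules
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # drop the initiator-side 10.0.0.1/24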
00:39:43.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:43.814 06:50:03 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.814 --rc genhtml_branch_coverage=1 00:39:43.814 --rc genhtml_function_coverage=1 00:39:43.814 --rc genhtml_legend=1 00:39:43.814 --rc geninfo_all_blocks=1 00:39:43.814 --rc geninfo_unexecuted_blocks=1 00:39:43.814 00:39:43.814 ' 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.814 --rc genhtml_branch_coverage=1 00:39:43.814 --rc genhtml_function_coverage=1 00:39:43.814 --rc genhtml_legend=1 00:39:43.814 --rc geninfo_all_blocks=1 00:39:43.814 --rc geninfo_unexecuted_blocks=1 00:39:43.814 00:39:43.814 ' 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:39:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.814 --rc genhtml_branch_coverage=1 00:39:43.814 --rc genhtml_function_coverage=1 00:39:43.814 --rc genhtml_legend=1 00:39:43.814 --rc geninfo_all_blocks=1 00:39:43.814 --rc geninfo_unexecuted_blocks=1 00:39:43.814 00:39:43.814 ' 00:39:43.814 06:50:03 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.814 --rc genhtml_branch_coverage=1 00:39:43.814 --rc genhtml_function_coverage=1 00:39:43.814 --rc genhtml_legend=1 00:39:43.814 --rc geninfo_all_blocks=1 00:39:43.814 --rc geninfo_unexecuted_blocks=1 00:39:43.815 00:39:43.815 ' 00:39:43.815 06:50:03 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:43.815 06:50:03 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:39:43.815 06:50:03 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.815 06:50:03 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.815 06:50:03 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.815 06:50:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.815 06:50:03 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.815 06:50:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.815 06:50:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:43.815 06:50:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:43.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:43.815 06:50:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:43.815 06:50:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:43.815 06:50:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:43.815 06:50:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:43.815 06:50:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:43.815 06:50:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:43.815 06:50:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:43.815 06:50:03 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:39:43.815 06:50:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:51.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.951 
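The `gather_supported_nvmf_pci_devs` trace above builds vendor:device buckets (Intel E810 = 8086:1592/8086:159b, X722 = 8086:37d2, Mellanox = 15b3:*) from SPDK's internal `pci_bus_cache` and then resolves each port's netdev name through sysfs, producing the "Found net devices under ..." lines. A minimal standalone sketch of the same classification, substituting `lspci` for the internal cache (the helper layout and the class filter are illustrative, not SPDK's actual code):

```bash
#!/usr/bin/env bash
# Sketch only: bucket Ethernet-class PCI devices by vendor:device ID, then
# resolve the kernel netdev name under each device's sysfs node, mirroring
# the "Found net devices under ..." records in the log.
declare -a e810=() x722=() mlx=()
while read -r slot _class id _; do
    case "$id" in
        8086:1592|8086:159b) e810+=("$slot") ;;   # Intel E810 variants
        8086:37d2)           x722+=("$slot") ;;   # Intel X722
        15b3:*)              mlx+=("$slot")  ;;   # Mellanox
    esac
done < <(lspci -nD -d ::0200)                     # 0200 = Ethernet class code

for pci in "${e810[@]}"; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
    done
done
```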
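The `nvmf_tcp_init` records that follow then split the two E810 ports so target and initiator traffic actually crosses the wire: one port stays in the root namespace as the initiator (10.0.0.1 on cvl_0_1) while the other moves into a private namespace as the target (10.0.0.2 on cvl_0_0). Without the namespace, the kernel would short-circuit 10.0.0.1 -> 10.0.0.2 through the local stack. A condensed sketch of that sequence (interface and namespace names taken from the log; run as root):

```bash
#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init records below. Hiding the target-side port
# in its own netns forces traffic between the two ports onto the physical link.
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                 # target port into the netns
ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator
```

The target application itself is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`), as the `NVMF_TARGET_NS_CMD` and `nvmfappstart` records further down show.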
06:50:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:51.951 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:51.951 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:51.951 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:51.951 06:50:10 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:51.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:51.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:39:51.951 00:39:51.951 --- 10.0.0.2 ping statistics --- 00:39:51.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.951 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:51.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:51.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:39:51.951 00:39:51.951 --- 10.0.0.1 ping statistics --- 00:39:51.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.951 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:51.951 06:50:11 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:54.064 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:39:54.064 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:54.064 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:54.325 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:54.325 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:54.325 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:54.325 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:54.585 06:50:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:54.585 06:50:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:54.585 06:50:14 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:54.585 06:50:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:54.585 06:50:14 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3140771 00:39:54.586 06:50:14 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3140771 00:39:54.586 06:50:14 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:54.586 06:50:14 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3140771 ']' 00:39:54.586 06:50:14 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.586 06:50:14 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:54.586 06:50:14 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:39:54.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.586 06:50:14 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:54.586 06:50:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:54.586 [2024-11-20 06:50:14.802950] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:39:54.586 [2024-11-20 06:50:14.803000] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:54.847 [2024-11-20 06:50:14.898875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.847 [2024-11-20 06:50:14.935679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:54.847 [2024-11-20 06:50:14.935716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:54.847 [2024-11-20 06:50:14.935724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:54.847 [2024-11-20 06:50:14.935731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:54.847 [2024-11-20 06:50:14.935737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:54.847 [2024-11-20 06:50:14.936346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:39:55.418 06:50:15 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:55.418 06:50:15 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:55.418 06:50:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:55.418 06:50:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:55.418 [2024-11-20 06:50:15.672321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.418 06:50:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:55.418 06:50:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:55.679 ************************************ 00:39:55.679 START TEST fio_dif_1_default 00:39:55.679 ************************************ 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:55.679 bdev_null0 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.679 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:55.680 [2024-11-20 06:50:15.764800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:55.680 { 00:39:55.680 "params": { 00:39:55.680 "name": "Nvme$subsystem", 00:39:55.680 "trtype": "$TEST_TRANSPORT", 00:39:55.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.680 "adrfam": "ipv4", 00:39:55.680 "trsvcid": "$NVMF_PORT", 00:39:55.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.680 "hdgst": ${hdgst:-false}, 00:39:55.680 "ddgst": ${ddgst:-false} 00:39:55.680 }, 00:39:55.680 "method": "bdev_nvme_attach_controller" 00:39:55.680 } 00:39:55.680 EOF 00:39:55.680 )") 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
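The `fio_bdev ... /dev/fd/62 /dev/fd/61` invocation traced above never writes a config file: the SPDK JSON (assembled by `gen_nvmf_target_json` and `jq`) and the fio job file (from `gen_fio_conf`) both arrive over process substitution. A minimal sketch of the same plumbing; the attach-controller fragment matches the one printed in the log, but the surrounding `subsystems` wrapper, the job-file options, and `SPDK_DIR` are assumptions here, not the exact helper output:

```bash
#!/usr/bin/env bash
# Sketch: hand fio an SPDK bdev config and a job file via /dev/fd, no temp
# files. SPDK_DIR is a placeholder for a real SPDK build tree.
SPDK_DIR=/path/to/spdk

json_conf() {       # bdev to attach; params fragment as printed in the log
    cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{
  "method":"bdev_nvme_attach_controller",
  "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2",
            "adrfam":"ipv4","trsvcid":"4420",
            "subnqn":"nqn.2016-06.io.spdk:cnode0",
            "hostnqn":"nqn.2016-06.io.spdk:host0",
            "hdgst":false,"ddgst":false}}]}]}
EOF
}

fio_job() {         # illustrative job matching the first test's fio banner
    cat <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=4k
iodepth=4
runtime=10
time_based=1
EOF
}

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
    fio --ioengine=spdk_bdev --spdk_json_conf <(json_conf) <(fio_job)
```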
00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:55.680 "params": { 00:39:55.680 "name": "Nvme0", 00:39:55.680 "trtype": "tcp", 00:39:55.680 "traddr": "10.0.0.2", 00:39:55.680 "adrfam": "ipv4", 00:39:55.680 "trsvcid": "4420", 00:39:55.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:55.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:55.680 "hdgst": false, 00:39:55.680 "ddgst": false 00:39:55.680 }, 00:39:55.680 "method": "bdev_nvme_attach_controller" 00:39:55.680 }' 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:55.680 06:50:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:56.251 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:56.251 fio-3.35 00:39:56.251 Starting 1 thread 00:40:08.487 00:40:08.487 filename0: (groupid=0, jobs=1): err= 0: pid=3141359: Wed Nov 20 06:50:26 2024 00:40:08.487 read: IOPS=97, BW=390KiB/s (400kB/s)(3920KiB/10040msec) 00:40:08.487 slat (nsec): min=5487, max=57761, avg=6366.96, stdev=2297.62 00:40:08.487 clat (usec): min=961, max=43795, avg=40960.82, stdev=2590.28 00:40:08.487 lat (usec): min=969, max=43832, avg=40967.18, stdev=2590.31 00:40:08.487 clat percentiles (usec): 00:40:08.487 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:08.487 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:08.487 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:40:08.487 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:40:08.487 | 99.99th=[43779] 00:40:08.487 bw ( KiB/s): min= 352, max= 416, per=99.89%, avg=390.40, stdev=16.74, samples=20 00:40:08.487 iops : min= 88, max= 104, avg=97.60, stdev= 4.19, samples=20 00:40:08.487 lat (usec) : 1000=0.41% 00:40:08.487 lat (msec) : 50=99.59% 00:40:08.487 cpu : usr=93.52%, sys=6.23%, ctx=12, majf=0, minf=187 00:40:08.487 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:08.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.487 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:08.487 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:08.487 00:40:08.487 Run 
status group 0 (all jobs): 00:40:08.487 READ: bw=390KiB/s (400kB/s), 390KiB/s-390KiB/s (400kB/s-400kB/s), io=3920KiB (4014kB), run=10040-10040msec 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 00:40:08.487 real 0m11.331s 00:40:08.487 user 0m27.008s 00:40:08.487 sys 0m1.037s 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 ************************************ 00:40:08.487 END TEST fio_dif_1_default 00:40:08.487 ************************************ 00:40:08.487 06:50:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:08.487 06:50:27 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:08.487 06:50:27 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 ************************************ 00:40:08.487 START TEST fio_dif_1_multi_subsystems 00:40:08.487 ************************************ 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 bdev_null0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 [2024-11-20 06:50:27.179125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 bdev_null1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:08.487 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:08.488 { 00:40:08.488 "params": { 00:40:08.488 "name": "Nvme$subsystem", 00:40:08.488 "trtype": "$TEST_TRANSPORT", 00:40:08.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.488 "adrfam": "ipv4", 00:40:08.488 "trsvcid": "$NVMF_PORT", 00:40:08.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.488 "hdgst": ${hdgst:-false}, 00:40:08.488 "ddgst": ${ddgst:-false} 00:40:08.488 }, 00:40:08.488 "method": "bdev_nvme_attach_controller" 00:40:08.488 } 00:40:08.488 EOF 00:40:08.488 )") 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.488 06:50:27 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:08.488 { 00:40:08.488 "params": { 00:40:08.488 "name": "Nvme$subsystem", 00:40:08.488 "trtype": "$TEST_TRANSPORT", 00:40:08.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.488 "adrfam": "ipv4", 00:40:08.488 "trsvcid": "$NVMF_PORT", 00:40:08.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.488 "hdgst": ${hdgst:-false}, 00:40:08.488 "ddgst": ${ddgst:-false} 00:40:08.488 }, 00:40:08.488 "method": "bdev_nvme_attach_controller" 00:40:08.488 } 00:40:08.488 EOF 00:40:08.488 )") 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:08.488 "params": { 00:40:08.488 "name": "Nvme0", 00:40:08.488 "trtype": "tcp", 00:40:08.488 "traddr": "10.0.0.2", 00:40:08.488 "adrfam": "ipv4", 00:40:08.488 "trsvcid": "4420", 00:40:08.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:08.488 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:08.488 "hdgst": false, 00:40:08.488 "ddgst": false 00:40:08.488 }, 00:40:08.488 "method": "bdev_nvme_attach_controller" 00:40:08.488 },{ 00:40:08.488 "params": { 00:40:08.488 "name": "Nvme1", 00:40:08.488 "trtype": "tcp", 00:40:08.488 "traddr": "10.0.0.2", 00:40:08.488 "adrfam": "ipv4", 00:40:08.488 "trsvcid": "4420", 00:40:08.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:08.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:08.488 "hdgst": false, 00:40:08.488 "ddgst": false 00:40:08.488 }, 00:40:08.488 "method": "bdev_nvme_attach_controller" 00:40:08.488 }' 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # 
asan_lib= 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:08.488 06:50:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:08.488 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:08.488 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:08.488 fio-3.35 00:40:08.488 Starting 2 threads 00:40:18.483 00:40:18.483 filename0: (groupid=0, jobs=1): err= 0: pid=3143560: Wed Nov 20 06:50:38 2024 00:40:18.483 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10034msec) 00:40:18.483 slat (nsec): min=5497, max=31573, avg=6352.86, stdev=1713.58 00:40:18.483 clat (usec): min=40847, max=42465, avg=41103.79, stdev=333.28 00:40:18.483 lat (usec): min=40853, max=42496, avg=41110.14, stdev=333.48 00:40:18.483 clat percentiles (usec): 00:40:18.483 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:18.483 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:18.483 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:40:18.483 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:18.483 | 99.99th=[42206] 00:40:18.483 bw ( KiB/s): min= 384, max= 416, per=33.75%, avg=388.80, stdev=11.72, samples=20 00:40:18.483 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:40:18.483 lat (msec) : 50=100.00% 00:40:18.483 cpu : usr=95.49%, sys=4.31%, ctx=8, majf=0, minf=85 00:40:18.483 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:18.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.483 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:18.483 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:18.483 filename1: (groupid=0, jobs=1): err= 0: pid=3143561: Wed Nov 20 06:50:38 2024 00:40:18.483 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10004msec) 00:40:18.483 slat (nsec): min=5492, max=41842, avg=6327.88, stdev=1595.67 00:40:18.483 clat (usec): min=502, max=42373, avg=20954.28, stdev=20200.90 00:40:18.483 lat (usec): min=510, max=42379, avg=20960.61, stdev=20200.86 00:40:18.483 clat percentiles (usec): 00:40:18.483 | 1.00th=[ 570], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:40:18.483 | 30.00th=[ 816], 40.00th=[ 840], 50.00th=[ 1090], 60.00th=[41157], 00:40:18.483 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:18.483 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:40:18.483 | 99.99th=[42206] 00:40:18.483 bw ( KiB/s): min= 704, max= 832, per=66.45%, avg=764.63, stdev=25.90, samples=19 00:40:18.483 iops : min= 176, max= 208, avg=191.16, stdev= 6.47, samples=19 00:40:18.483 lat (usec) : 750=3.35%, 1000=45.65% 00:40:18.483 lat (msec) : 2=1.10%, 50=49.90% 00:40:18.483 cpu : usr=95.55%, sys=4.25%, ctx=10, majf=0, minf=176 00:40:18.483 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:18.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.484 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.484 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:18.484 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:18.484 00:40:18.484 Run status group 0 (all jobs): 00:40:18.484 READ: bw=1150KiB/s (1177kB/s), 389KiB/s-763KiB/s (398kB/s-781kB/s), io=11.3MiB (11.8MB), run=10004-10034msec 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.484 00:40:18.484 real 0m11.505s 00:40:18.484 user 0m35.157s 00:40:18.484 sys 0m1.227s 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 ************************************ 00:40:18.484 END TEST fio_dif_1_multi_subsystems 00:40:18.484 ************************************ 00:40:18.484 06:50:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:40:18.484 06:50:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:18.484 06:50:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 ************************************ 00:40:18.484 START TEST fio_dif_rand_params 00:40:18.484 ************************************ 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 bdev_null0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.484 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:18.744 [2024-11-20 06:50:38.763218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:18.744 06:50:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:18.744 { 00:40:18.744 "params": { 00:40:18.744 "name": "Nvme$subsystem", 00:40:18.744 "trtype": "$TEST_TRANSPORT", 00:40:18.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:18.744 "adrfam": "ipv4", 00:40:18.744 "trsvcid": "$NVMF_PORT", 00:40:18.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:18.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:18.744 "hdgst": ${hdgst:-false}, 00:40:18.744 "ddgst": ${ddgst:-false} 00:40:18.744 }, 00:40:18.744 "method": "bdev_nvme_attach_controller" 00:40:18.744 } 00:40:18.744 EOF 00:40:18.744 )") 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
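For this test the null bdev is created with `--dif-type 3` (the earlier `fio_dif_1_*` tests used `--dif-type 1`), and the transport was created with `--dif-insert-or-strip`, so the target inserts and strips protection information on the 512+16 formatted namespace. The `rpc_cmd` calls traced above map one-to-one onto `scripts/rpc.py`; spelled out directly (the `SPDK_DIR` path is a placeholder, and the default RPC socket is assumed):

```bash
#!/usr/bin/env bash
# The rpc_cmd sequence above, issued through scripts/rpc.py directly.
# Arguments mirror the log: 64 MiB null bdev, 512 B blocks, 16 B metadata,
# DIF type 3, TCP listener on 10.0.0.2:4420.
SPDK_DIR=/path/to/spdk
RPC="$SPDK_DIR/scripts/rpc.py"   # add -s /var/tmp/spdk.sock for a custom socket

$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
     --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420
```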
00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:18.744 "params": { 00:40:18.744 "name": "Nvme0", 00:40:18.744 "trtype": "tcp", 00:40:18.744 "traddr": "10.0.0.2", 00:40:18.744 "adrfam": "ipv4", 00:40:18.744 "trsvcid": "4420", 00:40:18.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:18.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:18.744 "hdgst": false, 00:40:18.744 "ddgst": false 00:40:18.744 }, 00:40:18.744 "method": "bdev_nvme_attach_controller" 00:40:18.744 }' 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:18.744 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:18.745 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:18.745 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:18.745 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:18.745 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:18.745 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:18.745 06:50:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:19.005 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:19.005 ... 
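Just before this fio banner, the trace runs the fio_plugin helper, which probes the SPDK bdev ioengine with ldd for a linked sanitizer runtime (libasan, then libclang_rt.asan) and, when one is found, preloads it ahead of the plugin so the sanitizer is interposed before fio loads the engine. A condensed sketch reconstructed from the traced commands; the exact body of the autotest helper is an assumption here:

    fio_plugin() {
        local fio_dir=/usr/src/fio
        local plugin=$1; shift
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # the third ldd column is the resolved runtime path; it is empty
            # when the plugin was not built against that sanitizer
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n "$asan_lib" ]] && break
        done
        # preload the sanitizer runtime (if any) ahead of the plugin itself
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }

In this run both greps come back empty, so asan_lib stays blank and LD_PRELOAD collapses to the plugin path with a leading space, exactly as the trace records before fio starts its three randread threads against the DIF type 3 null bdev.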
00:40:19.005 fio-3.35 00:40:19.005 Starting 3 threads 00:40:25.583 00:40:25.583 filename0: (groupid=0, jobs=1): err= 0: pid=3145912: Wed Nov 20 06:50:44 2024 00:40:25.583 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(204MiB/5047msec) 00:40:25.583 slat (nsec): min=5881, max=38329, avg=8801.23, stdev=2203.85 00:40:25.583 clat (usec): min=4128, max=48709, avg=9235.25, stdev=3326.04 00:40:25.583 lat (usec): min=4136, max=48716, avg=9244.05, stdev=3326.16 00:40:25.583 clat percentiles (usec): 00:40:25.583 | 1.00th=[ 4752], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 8225], 00:40:25.583 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:40:25.583 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:40:25.583 | 99.00th=[11207], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:40:25.583 | 99.99th=[48497] 00:40:25.583 bw ( KiB/s): min=38656, max=46080, per=34.60%, avg=41753.60, stdev=2074.32, samples=10 00:40:25.583 iops : min= 302, max= 360, avg=326.20, stdev=16.21, samples=10 00:40:25.583 lat (msec) : 10=83.04%, 20=16.29%, 50=0.67% 00:40:25.583 cpu : usr=95.56%, sys=4.18%, ctx=8, majf=0, minf=105 00:40:25.583 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.583 issued rwts: total=1633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:25.583 filename0: (groupid=0, jobs=1): err= 0: pid=3145913: Wed Nov 20 06:50:44 2024 00:40:25.583 read: IOPS=321, BW=40.2MiB/s (42.2MB/s)(203MiB/5044msec) 00:40:25.583 slat (nsec): min=6052, max=41680, avg=9290.98, stdev=1801.96 00:40:25.583 clat (usec): min=5191, max=49387, avg=9285.50, stdev=4343.68 00:40:25.583 lat (usec): min=5204, max=49394, avg=9294.79, stdev=4343.83 00:40:25.583 clat percentiles (usec): 00:40:25.583 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 8029], 00:40:25.583 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:40:25.583 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10290], 00:40:25.583 | 99.00th=[45876], 99.50th=[46924], 99.90th=[48497], 99.95th=[49546], 00:40:25.583 | 99.99th=[49546] 00:40:25.583 bw ( KiB/s): min=27136, max=47616, per=34.39%, avg=41497.60, stdev=5352.86, samples=10 00:40:25.583 iops : min= 212, max= 372, avg=324.20, stdev=41.82, samples=10 00:40:25.583 lat (msec) : 10=91.13%, 20=7.64%, 50=1.23% 00:40:25.583 cpu : usr=94.15%, sys=5.57%, ctx=9, majf=0, minf=84 00:40:25.583 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.583 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:25.583 filename0: (groupid=0, jobs=1): err= 0: pid=3145914: Wed Nov 20 06:50:44 2024 00:40:25.584 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(188MiB/5046msec) 00:40:25.584 slat (nsec): min=5559, max=41886, avg=8681.45, stdev=2417.62 00:40:25.584 clat (usec): min=5953, max=50729, avg=10038.17, stdev=5709.46 00:40:25.584 lat (usec): min=5960, max=50739, avg=10046.85, stdev=5709.71 00:40:25.584 clat percentiles (usec): 00:40:25.584 | 1.00th=[ 6849], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8291], 00:40:25.584 | 
30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:40:25.584 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:40:25.584 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[50594], 00:40:25.584 | 99.99th=[50594] 00:40:25.584 bw ( KiB/s): min=13312, max=44288, per=31.82%, avg=38400.00, stdev=8994.47, samples=10 00:40:25.584 iops : min= 104, max= 346, avg=300.00, stdev=70.27, samples=10 00:40:25.584 lat (msec) : 10=76.50%, 20=21.37%, 50=1.86%, 100=0.27% 00:40:25.584 cpu : usr=94.81%, sys=4.60%, ctx=268, majf=0, minf=120 00:40:25.584 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.584 issued rwts: total=1502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:25.584 00:40:25.584 Run status group 0 (all jobs): 00:40:25.584 READ: bw=118MiB/s (124MB/s), 37.2MiB/s-40.4MiB/s (39.0MB/s-42.4MB/s), io=595MiB (624MB), run=5044-5047msec 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 bdev_null0 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 [2024-11-20 06:50:45.022397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 bdev_null1 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 bdev_null2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:25.584 { 00:40:25.584 "params": { 00:40:25.584 "name": "Nvme$subsystem", 00:40:25.584 "trtype": "$TEST_TRANSPORT", 00:40:25.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:25.584 "adrfam": "ipv4", 00:40:25.584 "trsvcid": "$NVMF_PORT", 00:40:25.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:25.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:25.584 "hdgst": ${hdgst:-false}, 00:40:25.584 "ddgst": ${ddgst:-false} 00:40:25.584 }, 00:40:25.584 "method": "bdev_nvme_attach_controller" 00:40:25.584 } 00:40:25.584 EOF 00:40:25.584 )") 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:25.584 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:25.585 { 00:40:25.585 "params": { 00:40:25.585 "name": "Nvme$subsystem", 00:40:25.585 "trtype": "$TEST_TRANSPORT", 00:40:25.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:25.585 "adrfam": "ipv4", 00:40:25.585 "trsvcid": "$NVMF_PORT", 00:40:25.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:25.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:25.585 "hdgst": ${hdgst:-false}, 00:40:25.585 "ddgst": ${ddgst:-false} 00:40:25.585 }, 00:40:25.585 "method": "bdev_nvme_attach_controller" 00:40:25.585 } 00:40:25.585 EOF 00:40:25.585 )") 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 
-- # cat 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:25.585 { 00:40:25.585 "params": { 00:40:25.585 "name": "Nvme$subsystem", 00:40:25.585 "trtype": "$TEST_TRANSPORT", 00:40:25.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:25.585 "adrfam": "ipv4", 00:40:25.585 "trsvcid": "$NVMF_PORT", 00:40:25.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:25.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:25.585 "hdgst": ${hdgst:-false}, 00:40:25.585 "ddgst": ${ddgst:-false} 00:40:25.585 }, 00:40:25.585 "method": "bdev_nvme_attach_controller" 00:40:25.585 } 00:40:25.585 EOF 00:40:25.585 )") 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:25.585 "params": { 00:40:25.585 "name": "Nvme0", 00:40:25.585 "trtype": "tcp", 00:40:25.585 "traddr": "10.0.0.2", 00:40:25.585 "adrfam": "ipv4", 00:40:25.585 "trsvcid": "4420", 00:40:25.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:25.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:25.585 "hdgst": false, 00:40:25.585 "ddgst": false 00:40:25.585 }, 00:40:25.585 "method": "bdev_nvme_attach_controller" 00:40:25.585 },{ 00:40:25.585 "params": { 00:40:25.585 "name": "Nvme1", 00:40:25.585 "trtype": "tcp", 00:40:25.585 "traddr": "10.0.0.2", 00:40:25.585 "adrfam": "ipv4", 00:40:25.585 "trsvcid": "4420", 00:40:25.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:25.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:25.585 "hdgst": false, 00:40:25.585 "ddgst": false 00:40:25.585 }, 00:40:25.585 "method": "bdev_nvme_attach_controller" 00:40:25.585 },{ 00:40:25.585 "params": { 00:40:25.585 "name": "Nvme2", 00:40:25.585 "trtype": "tcp", 00:40:25.585 "traddr": "10.0.0.2", 00:40:25.585 "adrfam": "ipv4", 00:40:25.585 "trsvcid": "4420", 00:40:25.585 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:25.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:25.585 "hdgst": false, 00:40:25.585 "ddgst": false 00:40:25.585 }, 00:40:25.585 "method": "bdev_nvme_attach_controller" 00:40:25.585 }' 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:25.585 06:50:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:25.585 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:25.585 ... 00:40:25.585 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:25.585 ... 00:40:25.585 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:25.585 ... 00:40:25.585 fio-3.35 00:40:25.585 Starting 24 threads 00:40:37.820 00:40:37.820 filename0: (groupid=0, jobs=1): err= 0: pid=3147285: Wed Nov 20 06:50:56 2024 00:40:37.820 read: IOPS=729, BW=2919KiB/s (2989kB/s)(28.5MiB/10003msec) 00:40:37.820 slat (nsec): min=5503, max=81431, avg=10141.86, stdev=8017.29 00:40:37.820 clat (usec): min=4364, max=40611, avg=21852.77, stdev=4111.16 00:40:37.820 lat (usec): min=4372, max=40620, avg=21862.91, stdev=4111.29 00:40:37.820 clat percentiles (usec): 00:40:37.820 | 1.00th=[ 5669], 5.00th=[12649], 10.00th=[16319], 20.00th=[21103], 00:40:37.820 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:40:37.820 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773], 00:40:37.820 | 99.00th=[28705], 99.50th=[34341], 99.90th=[40109], 99.95th=[40633], 00:40:37.820 | 99.99th=[40633] 00:40:37.820 bw ( KiB/s): min= 2688, max= 3472, per=4.40%, avg=2931.79, stdev=231.04, samples=19 00:40:37.820 iops : min= 672, max= 868, avg=732.95, stdev=57.76, samples=19 00:40:37.820 lat (msec) : 10=2.90%, 20=13.08%, 50=84.01% 00:40:37.820 cpu : usr=98.88%, sys=0.80%, ctx=16, majf=0, minf=9 00:40:37.820 IO depths : 1=4.1%, 2=8.3%, 4=18.3%, 8=60.7%, 16=8.6%, 32=0.0%, >=64=0.0% 00:40:37.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.820 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.820 issued rwts: total=7299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.820 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.820 filename0: (groupid=0, jobs=1): err= 0: pid=3147286: Wed Nov 20 06:50:56 2024 00:40:37.820 read: IOPS=689, BW=2757KiB/s (2823kB/s)(27.0MiB/10016msec) 00:40:37.820 slat (usec): min=5, max=124, avg=20.65, stdev=15.63 00:40:37.820 clat (usec): min=5277, max=33989, avg=23053.46, stdev=1871.84 00:40:37.820 lat (usec): min=5283, max=33995, avg=23074.12, stdev=1872.64 00:40:37.820 clat percentiles (usec): 00:40:37.820 | 1.00th=[13829], 5.00th=[21103], 10.00th=[22152], 20.00th=[22676], 00:40:37.820 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.820 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773], 00:40:37.820 | 99.00th=[26870], 99.50th=[27919], 99.90th=[30278], 99.95th=[33817], 00:40:37.820 | 99.99th=[33817] 00:40:37.820 bw ( KiB/s): min= 2688, max= 2992, per=4.14%, avg=2755.45, stdev=98.68, samples=20 00:40:37.820 iops : min= 672, max= 748, avg=688.85, stdev=24.68, samples=20 00:40:37.820 lat (msec) : 10=0.25%, 20=3.62%, 50=96.13% 00:40:37.820 cpu : usr=98.97%, sys=0.72%, ctx=19, majf=0, minf=9 00:40:37.820 IO depths : 1=5.1%, 2=10.2%, 4=21.6%, 8=55.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:40:37.820 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.820 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.820 issued rwts: total=6904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.820 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.820 filename0: (groupid=0, jobs=1): err= 0: pid=3147287: Wed Nov 20 06:50:56 2024 00:40:37.820 read: IOPS=690, BW=2761KiB/s (2827kB/s)(27.0MiB/10004msec) 00:40:37.820 slat (usec): min=5, max=197, avg=23.02, stdev=26.99 00:40:37.820 clat (usec): min=3449, max=48065, avg=23065.00, stdev=3181.47 00:40:37.820 lat (usec): min=3455, max=48087, avg=23088.02, stdev=3182.13 00:40:37.820 clat percentiles (usec): 00:40:37.820 | 1.00th=[11338], 5.00th=[17957], 10.00th=[21627], 20.00th=[22414], 00:40:37.820 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.820 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25560], 00:40:37.820 | 99.00th=[35390], 99.50th=[38011], 99.90th=[47973], 99.95th=[47973], 00:40:37.820 | 99.99th=[47973] 00:40:37.820 bw ( KiB/s): min= 2576, max= 2912, per=4.13%, avg=2749.58, stdev=73.34, samples=19 00:40:37.820 iops : min= 644, max= 728, avg=687.37, stdev=18.36, samples=19 00:40:37.820 lat (msec) : 4=0.09%, 10=0.59%, 20=6.01%, 50=93.31% 00:40:37.820 cpu : usr=98.91%, sys=0.75%, ctx=35, majf=0, minf=9 00:40:37.820 IO depths : 1=0.1%, 2=0.9%, 4=5.2%, 8=77.1%, 16=16.6%, 32=0.0%, >=64=0.0% 00:40:37.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.820 complete : 0=0.0%, 4=90.2%, 8=8.1%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.820 issued rwts: total=6905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.820 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.820 filename0: (groupid=0, jobs=1): err= 0: pid=3147288: Wed Nov 20 06:50:56 2024 00:40:37.820 read: IOPS=696, BW=2786KiB/s (2853kB/s)(27.2MiB/10007msec) 00:40:37.820 slat (usec): min=5, max=221, avg=25.75, stdev=26.60 00:40:37.820 clat (usec): min=6180, max=43943, avg=22776.76, stdev=4304.77 00:40:37.820 lat (usec): min=6189, max=43951, avg=22802.51, stdev=4306.99 00:40:37.820 clat percentiles (usec): 00:40:37.820 | 1.00th=[10159], 5.00th=[14353], 10.00th=[17695], 20.00th=[21627], 00:40:37.820 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:40:37.820 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25297], 95.00th=[30278], 00:40:37.820 | 99.00th=[36963], 99.50th=[39060], 99.90th=[42206], 99.95th=[43779], 00:40:37.820 | 99.99th=[43779] 00:40:37.820 bw ( KiB/s): min= 2560, max= 3136, per=4.16%, avg=2771.89, stdev=136.78, samples=19 00:40:37.820 iops : min= 640, max= 784, avg=692.95, stdev=34.18, samples=19 00:40:37.820 lat (msec) : 10=0.85%, 20=13.32%, 50=85.84% 00:40:37.820 cpu : usr=99.02%, sys=0.66%, ctx=17, majf=0, minf=9 00:40:37.820 IO depths : 1=2.6%, 2=5.3%, 4=13.4%, 8=67.7%, 16=11.0%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=91.2%, 8=4.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=6969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename0: (groupid=0, jobs=1): err= 0: pid=3147289: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=703, BW=2812KiB/s (2880kB/s)(27.5MiB/10005msec) 00:40:37.821 slat (usec): min=5, max=218, avg=30.34, stdev=27.35 00:40:37.821 clat (usec): min=10490, max=38086, avg=22468.84, 
stdev=2830.01 00:40:37.821 lat (usec): min=10501, max=38120, avg=22499.18, stdev=2834.65 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[13566], 5.00th=[16450], 10.00th=[18744], 20.00th=[22152], 00:40:37.821 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[22938], 00:40:37.821 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:40:37.821 | 99.00th=[31589], 99.50th=[33424], 99.90th=[36439], 99.95th=[36963], 00:40:37.821 | 99.99th=[38011] 00:40:37.821 bw ( KiB/s): min= 2560, max= 3152, per=4.22%, avg=2810.32, stdev=139.17, samples=19 00:40:37.821 iops : min= 640, max= 788, avg=702.53, stdev=34.82, samples=19 00:40:37.821 lat (msec) : 20=12.52%, 50=87.48% 00:40:37.821 cpu : usr=98.97%, sys=0.71%, ctx=15, majf=0, minf=9 00:40:37.821 IO depths : 1=4.4%, 2=8.8%, 4=19.1%, 8=59.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=92.5%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=7034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename0: (groupid=0, jobs=1): err= 0: pid=3147290: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=729, BW=2918KiB/s (2988kB/s)(28.5MiB/10003msec) 00:40:37.821 slat (usec): min=5, max=215, avg=19.70, stdev=21.70 00:40:37.821 clat (usec): min=3474, max=44457, avg=21772.57, stdev=4263.79 00:40:37.821 lat (usec): min=3485, max=44466, avg=21792.27, stdev=4266.53 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[ 9503], 5.00th=[13829], 10.00th=[15533], 20.00th=[19268], 00:40:37.821 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:40:37.821 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:40:37.821 | 99.00th=[38011], 99.50th=[39060], 99.90th=[43254], 99.95th=[44303], 00:40:37.821 | 99.99th=[44303] 00:40:37.821 bw ( KiB/s): min= 2688, max= 4016, per=4.40%, avg=2930.95, stdev=308.11, samples=19 00:40:37.821 iops : min= 672, max= 1004, avg=732.74, stdev=77.03, samples=19 00:40:37.821 lat (msec) : 4=0.10%, 10=1.15%, 20=20.68%, 50=78.07% 00:40:37.821 cpu : usr=98.85%, sys=0.80%, ctx=50, majf=0, minf=9 00:40:37.821 IO depths : 1=3.7%, 2=7.6%, 4=17.7%, 8=62.1%, 16=9.0%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=7297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename0: (groupid=0, jobs=1): err= 0: pid=3147291: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=693, BW=2776KiB/s (2842kB/s)(27.1MiB/10012msec) 00:40:37.821 slat (usec): min=5, max=132, avg=14.95, stdev=16.29 00:40:37.821 clat (usec): min=5690, max=37014, avg=22944.75, stdev=2381.91 00:40:37.821 lat (usec): min=5699, max=37020, avg=22959.70, stdev=2381.55 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[12911], 5.00th=[19006], 10.00th=[21890], 20.00th=[22676], 00:40:37.821 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.821 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:40:37.821 | 99.00th=[28181], 99.50th=[31065], 99.90th=[36963], 99.95th=[36963], 00:40:37.821 | 99.99th=[36963] 00:40:37.821 bw ( KiB/s): min= 2672, max= 3040, per=4.14%, avg=2753.89, stdev=97.69, samples=19 00:40:37.821 iops : 
min= 668, max= 760, avg=688.42, stdev=24.46, samples=19 00:40:37.821 lat (msec) : 10=0.53%, 20=5.48%, 50=93.98% 00:40:37.821 cpu : usr=99.02%, sys=0.66%, ctx=13, majf=0, minf=9 00:40:37.821 IO depths : 1=2.2%, 2=6.9%, 4=20.1%, 8=60.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=6948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename0: (groupid=0, jobs=1): err= 0: pid=3147292: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=688, BW=2756KiB/s (2822kB/s)(26.9MiB/10009msec) 00:40:37.821 slat (usec): min=5, max=164, avg=29.64, stdev=23.24 00:40:37.821 clat (usec): min=6104, max=43048, avg=22963.26, stdev=3009.97 00:40:37.821 lat (usec): min=6112, max=43056, avg=22992.90, stdev=3011.12 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[11207], 5.00th=[17957], 10.00th=[21890], 20.00th=[22414], 00:40:37.821 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:40:37.821 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:40:37.821 | 99.00th=[32375], 99.50th=[40109], 99.90th=[42730], 99.95th=[43254], 00:40:37.821 | 99.99th=[43254] 00:40:37.821 bw ( KiB/s): min= 2560, max= 3104, per=4.13%, avg=2748.32, stdev=114.52, samples=19 00:40:37.821 iops : min= 640, max= 776, avg=687.05, stdev=28.65, samples=19 00:40:37.821 lat (msec) : 10=0.62%, 20=5.58%, 50=93.79% 00:40:37.821 cpu : usr=98.88%, sys=0.76%, ctx=70, majf=0, minf=9 00:40:37.821 IO depths : 1=4.2%, 2=9.0%, 4=19.4%, 8=58.3%, 16=9.0%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=92.7%, 8=2.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=6896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename1: (groupid=0, jobs=1): err= 0: pid=3147293: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=689, BW=2759KiB/s (2825kB/s)(27.0MiB/10014msec) 00:40:37.821 slat (usec): min=5, max=115, avg=18.25, stdev=14.41 00:40:37.821 clat (usec): min=5900, max=41948, avg=23053.54, stdev=2108.20 00:40:37.821 lat (usec): min=5906, max=41957, avg=23071.79, stdev=2108.35 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[12256], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:40:37.821 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:40:37.821 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24511], 00:40:37.821 | 99.00th=[25297], 99.50th=[26608], 99.90th=[41681], 99.95th=[41681], 00:40:37.821 | 99.99th=[42206] 00:40:37.821 bw ( KiB/s): min= 2560, max= 2944, per=4.14%, avg=2758.95, stdev=94.03, samples=19 00:40:37.821 iops : min= 640, max= 736, avg=689.68, stdev=23.52, samples=19 00:40:37.821 lat (msec) : 10=0.55%, 20=2.61%, 50=96.84% 00:40:37.821 cpu : usr=98.78%, sys=0.89%, ctx=18, majf=0, minf=9 00:40:37.821 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=6906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename1: 
(groupid=0, jobs=1): err= 0: pid=3147294: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=709, BW=2837KiB/s (2905kB/s)(27.8MiB/10022msec) 00:40:37.821 slat (usec): min=5, max=164, avg=19.70, stdev=19.95 00:40:37.821 clat (usec): min=3609, max=43624, avg=22368.75, stdev=4275.05 00:40:37.821 lat (usec): min=3615, max=43634, avg=22388.45, stdev=4277.33 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[ 6325], 5.00th=[13435], 10.00th=[18220], 20.00th=[22152], 00:40:37.821 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:40:37.821 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:40:37.821 | 99.00th=[38011], 99.50th=[40633], 99.90th=[42730], 99.95th=[43779], 00:40:37.821 | 99.99th=[43779] 00:40:37.821 bw ( KiB/s): min= 2640, max= 3376, per=4.27%, avg=2841.60, stdev=193.84, samples=20 00:40:37.821 iops : min= 660, max= 844, avg=710.40, stdev=48.46, samples=20 00:40:37.821 lat (msec) : 4=0.14%, 10=2.22%, 20=10.61%, 50=87.03% 00:40:37.821 cpu : usr=98.90%, sys=0.76%, ctx=19, majf=0, minf=9 00:40:37.821 IO depths : 1=3.6%, 2=7.9%, 4=20.8%, 8=58.6%, 16=9.1%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=7108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename1: (groupid=0, jobs=1): err= 0: pid=3147295: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=694, BW=2777KiB/s (2844kB/s)(27.1MiB/10011msec) 00:40:37.821 slat (usec): min=5, max=123, avg=27.31, stdev=19.95 00:40:37.821 clat (usec): min=8193, max=40044, avg=22807.00, stdev=2851.79 00:40:37.821 lat (usec): min=8205, max=40104, avg=22834.31, stdev=2854.19 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[13698], 5.00th=[16581], 10.00th=[20055], 20.00th=[22414], 00:40:37.821 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:40:37.821 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:40:37.821 | 99.00th=[33424], 99.50th=[34341], 99.90th=[39584], 99.95th=[40109], 00:40:37.821 | 99.99th=[40109] 00:40:37.821 bw ( KiB/s): min= 2656, max= 3232, per=4.16%, avg=2771.89, stdev=143.39, samples=19 00:40:37.821 iops : min= 664, max= 808, avg=692.95, stdev=35.84, samples=19 00:40:37.821 lat (msec) : 10=0.06%, 20=9.67%, 50=90.27% 00:40:37.821 cpu : usr=98.92%, sys=0.73%, ctx=26, majf=0, minf=9 00:40:37.821 IO depths : 1=4.6%, 2=9.2%, 4=20.0%, 8=58.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:40:37.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 complete : 0=0.0%, 4=92.7%, 8=1.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.821 issued rwts: total=6950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.821 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.821 filename1: (groupid=0, jobs=1): err= 0: pid=3147296: Wed Nov 20 06:50:56 2024 00:40:37.821 read: IOPS=705, BW=2824KiB/s (2891kB/s)(27.6MiB/10004msec) 00:40:37.821 slat (usec): min=5, max=147, avg=22.89, stdev=20.50 00:40:37.821 clat (usec): min=3205, max=40733, avg=22503.40, stdev=3748.40 00:40:37.821 lat (usec): min=3213, max=40787, avg=22526.29, stdev=3750.69 00:40:37.821 clat percentiles (usec): 00:40:37.821 | 1.00th=[12256], 5.00th=[15533], 10.00th=[17171], 20.00th=[20841], 00:40:37.822 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:40:37.822 | 70.00th=[23462], 
80.00th=[23987], 90.00th=[25297], 95.00th=[28443], 00:40:37.822 | 99.00th=[33162], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:40:37.822 | 99.99th=[40633] 00:40:37.822 bw ( KiB/s): min= 2688, max= 3056, per=4.22%, avg=2809.79, stdev=94.47, samples=19 00:40:37.822 iops : min= 672, max= 764, avg=702.42, stdev=23.65, samples=19 00:40:37.822 lat (msec) : 4=0.08%, 10=0.31%, 20=17.42%, 50=82.19% 00:40:37.822 cpu : usr=98.82%, sys=0.78%, ctx=36, majf=0, minf=10 00:40:37.822 IO depths : 1=2.0%, 2=4.0%, 4=10.2%, 8=71.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:40:37.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 complete : 0=0.0%, 4=90.4%, 8=5.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 issued rwts: total=7062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.822 filename1: (groupid=0, jobs=1): err= 0: pid=3147297: Wed Nov 20 06:50:56 2024 00:40:37.822 read: IOPS=685, BW=2742KiB/s (2808kB/s)(26.8MiB/10004msec) 00:40:37.822 slat (usec): min=5, max=132, avg=30.52, stdev=20.00 00:40:37.822 clat (usec): min=6692, max=41422, avg=23056.28, stdev=2169.68 00:40:37.822 lat (usec): min=6701, max=41440, avg=23086.79, stdev=2170.73 00:40:37.822 clat percentiles (usec): 00:40:37.822 | 1.00th=[12518], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:40:37.822 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:40:37.822 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773], 00:40:37.822 | 99.00th=[28967], 99.50th=[31327], 99.90th=[41157], 99.95th=[41157], 00:40:37.822 | 99.99th=[41681] 00:40:37.822 bw ( KiB/s): min= 2613, max= 2864, per=4.10%, avg=2732.58, stdev=68.55, samples=19 00:40:37.822 iops : min= 653, max= 716, avg=683.11, stdev=17.16, samples=19 00:40:37.822 lat (msec) : 10=0.32%, 20=2.61%, 50=97.07% 00:40:37.822 cpu : usr=99.05%, sys=0.61%, ctx=15, majf=0, minf=9 00:40:37.822 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:40:37.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 issued rwts: total=6858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.822 filename1: (groupid=0, jobs=1): err= 0: pid=3147298: Wed Nov 20 06:50:56 2024 00:40:37.822 read: IOPS=685, BW=2740KiB/s (2806kB/s)(26.8MiB/10016msec) 00:40:37.822 slat (usec): min=5, max=117, avg=26.94, stdev=17.84 00:40:37.822 clat (usec): min=9025, max=36478, avg=23132.15, stdev=2077.03 00:40:37.822 lat (usec): min=9040, max=36487, avg=23159.09, stdev=2077.11 00:40:37.822 clat percentiles (usec): 00:40:37.822 | 1.00th=[14615], 5.00th=[21103], 10.00th=[22152], 20.00th=[22676], 00:40:37.822 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.822 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773], 00:40:37.822 | 99.00th=[31589], 99.50th=[33424], 99.90th=[36439], 99.95th=[36439], 00:40:37.822 | 99.99th=[36439] 00:40:37.822 bw ( KiB/s): min= 2560, max= 2965, per=4.12%, avg=2740.68, stdev=90.82, samples=19 00:40:37.822 iops : min= 640, max= 741, avg=685.11, stdev=22.71, samples=19 00:40:37.822 lat (msec) : 10=0.15%, 20=4.08%, 50=95.77% 00:40:37.822 cpu : usr=98.84%, sys=0.83%, ctx=17, majf=0, minf=9 00:40:37.822 IO depths : 1=5.3%, 2=10.6%, 4=22.2%, 8=54.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:40:37.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 issued rwts: total=6862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.822 filename1: (groupid=0, jobs=1): err= 0: pid=3147299: Wed Nov 20 06:50:56 2024 00:40:37.822 read: IOPS=686, BW=2746KiB/s (2812kB/s)(26.8MiB/10005msec) 00:40:37.822 slat (usec): min=5, max=160, avg=34.23, stdev=23.43 00:40:37.822 clat (usec): min=8592, max=37124, avg=23022.81, stdev=1733.35 00:40:37.822 lat (usec): min=8602, max=37133, avg=23057.04, stdev=1734.00 00:40:37.822 clat percentiles (usec): 00:40:37.822 | 1.00th=[15795], 5.00th=[21890], 10.00th=[22152], 20.00th=[22676], 00:40:37.822 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:40:37.822 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24511], 00:40:37.822 | 99.00th=[26608], 99.50th=[30540], 99.90th=[36963], 99.95th=[36963], 00:40:37.822 | 99.99th=[36963] 00:40:37.822 bw ( KiB/s): min= 2682, max= 3056, per=4.13%, avg=2748.84, stdev=94.40, samples=19 00:40:37.822 iops : min= 670, max= 764, avg=687.16, stdev=23.60, samples=19 00:40:37.822 lat (msec) : 10=0.09%, 20=3.38%, 50=96.53% 00:40:37.822 cpu : usr=99.07%, sys=0.60%, ctx=13, majf=0, minf=9 00:40:37.822 IO depths : 1=5.6%, 2=11.4%, 4=23.8%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:40:37.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 issued rwts: total=6868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.822 filename1: (groupid=0, jobs=1): err= 0: pid=3147300: Wed Nov 20 06:50:56 2024 00:40:37.822 read: IOPS=688, BW=2756KiB/s (2822kB/s)(26.9MiB/10006msec) 00:40:37.822 slat (usec): min=5, max=116, avg=27.94, stdev=18.22 00:40:37.822 clat (usec): min=8002, max=46383, avg=22985.13, stdev=2874.44 00:40:37.822 lat (usec): min=8011, max=46395, avg=23013.07, stdev=2876.71 00:40:37.822 clat percentiles (usec): 00:40:37.822 | 1.00th=[12780], 5.00th=[17957], 10.00th=[21890], 20.00th=[22414], 00:40:37.822 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:40:37.822 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24511], 95.00th=[25297], 00:40:37.822 | 99.00th=[33817], 99.50th=[35914], 99.90th=[42730], 99.95th=[46400], 00:40:37.822 | 99.99th=[46400] 00:40:37.822 bw ( KiB/s): min= 2395, max= 3040, per=4.13%, avg=2749.74, stdev=135.70, samples=19 00:40:37.822 iops : min= 598, max= 760, avg=687.37, stdev=34.01, samples=19 00:40:37.822 lat (msec) : 10=0.09%, 20=7.59%, 50=92.33% 00:40:37.822 cpu : usr=98.99%, sys=0.66%, ctx=14, majf=0, minf=9 00:40:37.822 IO depths : 1=4.4%, 2=9.2%, 4=20.8%, 8=57.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:40:37.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 issued rwts: total=6894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.822 filename2: (groupid=0, jobs=1): err= 0: pid=3147301: Wed Nov 20 06:50:56 2024 00:40:37.822 read: IOPS=691, BW=2767KiB/s (2834kB/s)(27.2MiB/10049msec) 00:40:37.822 slat (usec): min=5, max=137, avg=24.41, stdev=19.62 00:40:37.822 clat (usec): min=9033, max=59099, avg=22908.31, stdev=3034.36 00:40:37.822 lat (usec): min=9039, 
max=59113, avg=22932.71, stdev=3035.68 00:40:37.822 clat percentiles (usec): 00:40:37.822 | 1.00th=[13566], 5.00th=[16909], 10.00th=[20841], 20.00th=[22414], 00:40:37.822 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:40:37.822 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:40:37.822 | 99.00th=[32113], 99.50th=[36963], 99.90th=[58983], 99.95th=[58983], 00:40:37.822 | 99.99th=[58983] 00:40:37.822 bw ( KiB/s): min= 2608, max= 3152, per=4.16%, avg=2771.15, stdev=129.42, samples=20 00:40:37.822 iops : min= 652, max= 788, avg=692.70, stdev=32.41, samples=20 00:40:37.822 lat (msec) : 10=0.06%, 20=8.93%, 50=90.84%, 100=0.17% 00:40:37.822 cpu : usr=98.88%, sys=0.80%, ctx=15, majf=0, minf=9 00:40:37.822 IO depths : 1=4.6%, 2=9.2%, 4=19.9%, 8=58.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:40:37.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 issued rwts: total=6952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.822 filename2: (groupid=0, jobs=1): err= 0: pid=3147302: Wed Nov 20 06:50:56 2024 00:40:37.822 read: IOPS=693, BW=2773KiB/s (2839kB/s)(27.1MiB/10003msec) 00:40:37.822 slat (usec): min=5, max=122, avg=24.00, stdev=20.74 00:40:37.822 clat (usec): min=4491, max=47659, avg=22888.32, stdev=3999.14 00:40:37.822 lat (usec): min=4497, max=47683, avg=22912.32, stdev=4001.22 00:40:37.822 clat percentiles (usec): 00:40:37.822 | 1.00th=[11731], 5.00th=[15664], 10.00th=[17957], 20.00th=[22152], 00:40:37.822 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:40:37.822 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25822], 95.00th=[29492], 00:40:37.822 | 99.00th=[36963], 99.50th=[39584], 99.90th=[47449], 99.95th=[47449], 00:40:37.822 | 99.99th=[47449] 00:40:37.822 bw ( KiB/s): min= 2544, max= 2992, per=4.15%, avg=2764.32, stdev=120.81, samples=19 00:40:37.822 iops : min= 636, max= 748, avg=691.05, stdev=30.22, samples=19 00:40:37.822 lat (msec) : 10=0.37%, 20=14.12%, 50=85.51% 00:40:37.822 cpu : usr=98.98%, sys=0.69%, ctx=13, majf=0, minf=9 00:40:37.822 IO depths : 1=2.5%, 2=5.3%, 4=13.6%, 8=67.3%, 16=11.3%, 32=0.0%, >=64=0.0% 00:40:37.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 complete : 0=0.0%, 4=91.3%, 8=4.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.822 issued rwts: total=6934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.822 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.822 filename2: (groupid=0, jobs=1): err= 0: pid=3147303: Wed Nov 20 06:50:56 2024 00:40:37.822 read: IOPS=716, BW=2868KiB/s (2937kB/s)(28.0MiB/10012msec) 00:40:37.822 slat (usec): min=5, max=121, avg=17.70, stdev=15.34 00:40:37.822 clat (usec): min=4778, max=41318, avg=22166.05, stdev=3755.11 00:40:37.822 lat (usec): min=4786, max=41353, avg=22183.75, stdev=3757.15 00:40:37.822 clat percentiles (usec): 00:40:37.822 | 1.00th=[11076], 5.00th=[14353], 10.00th=[16712], 20.00th=[21365], 00:40:37.822 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:40:37.822 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25297], 00:40:37.822 | 99.00th=[34866], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:40:37.822 | 99.99th=[41157] 00:40:37.822 bw ( KiB/s): min= 2682, max= 3344, per=4.31%, avg=2870.42, stdev=181.44, samples=19 00:40:37.822 iops : min= 670, max= 836, avg=717.58, 
stdev=45.39, samples=19 00:40:37.822 lat (msec) : 10=0.75%, 20=16.69%, 50=82.56% 00:40:37.822 cpu : usr=98.66%, sys=0.95%, ctx=38, majf=0, minf=9 00:40:37.822 IO depths : 1=3.9%, 2=7.9%, 4=18.3%, 8=61.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:40:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 issued rwts: total=7178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.823 filename2: (groupid=0, jobs=1): err= 0: pid=3147304: Wed Nov 20 06:50:56 2024 00:40:37.823 read: IOPS=682, BW=2730KiB/s (2796kB/s)(26.7MiB/10004msec) 00:40:37.823 slat (usec): min=5, max=119, avg=21.07, stdev=18.49 00:40:37.823 clat (usec): min=4073, max=52313, avg=23288.69, stdev=3633.71 00:40:37.823 lat (usec): min=4082, max=52330, avg=23309.76, stdev=3633.88 00:40:37.823 clat percentiles (usec): 00:40:37.823 | 1.00th=[11076], 5.00th=[17695], 10.00th=[20579], 20.00th=[22414], 00:40:37.823 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.823 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25822], 95.00th=[29492], 00:40:37.823 | 99.00th=[36439], 99.50th=[39584], 99.90th=[42206], 99.95th=[52167], 00:40:37.823 | 99.99th=[52167] 00:40:37.823 bw ( KiB/s): min= 2525, max= 2848, per=4.09%, avg=2721.63, stdev=86.92, samples=19 00:40:37.823 iops : min= 631, max= 712, avg=680.37, stdev=21.77, samples=19 00:40:37.823 lat (msec) : 10=0.63%, 20=7.94%, 50=91.36%, 100=0.07% 00:40:37.823 cpu : usr=99.07%, sys=0.60%, ctx=15, majf=0, minf=9 00:40:37.823 IO depths : 1=2.1%, 2=4.2%, 4=10.8%, 8=70.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:40:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 complete : 0=0.0%, 4=90.3%, 8=6.1%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 issued rwts: total=6828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.823 filename2: (groupid=0, jobs=1): err= 0: pid=3147305: Wed Nov 20 06:50:56 2024 00:40:37.823 read: IOPS=688, BW=2754KiB/s (2820kB/s)(26.9MiB/10003msec) 00:40:37.823 slat (usec): min=5, max=140, avg=26.00, stdev=18.70 00:40:37.823 clat (usec): min=5607, max=61206, avg=23022.33, stdev=3296.38 00:40:37.823 lat (usec): min=5615, max=61231, avg=23048.33, stdev=3297.59 00:40:37.823 clat percentiles (usec): 00:40:37.823 | 1.00th=[13042], 5.00th=[16909], 10.00th=[21365], 20.00th=[22414], 00:40:37.823 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:40:37.823 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24511], 95.00th=[26608], 00:40:37.823 | 99.00th=[33424], 99.50th=[36963], 99.90th=[47449], 99.95th=[61080], 00:40:37.823 | 99.99th=[61080] 00:40:37.823 bw ( KiB/s): min= 2400, max= 2928, per=4.11%, avg=2735.68, stdev=115.59, samples=19 00:40:37.823 iops : min= 600, max= 732, avg=683.89, stdev=28.90, samples=19 00:40:37.823 lat (msec) : 10=0.23%, 20=7.91%, 50=91.78%, 100=0.07% 00:40:37.823 cpu : usr=99.09%, sys=0.59%, ctx=14, majf=0, minf=9 00:40:37.823 IO depths : 1=3.9%, 2=8.1%, 4=18.4%, 8=60.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:40:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 complete : 0=0.0%, 4=92.4%, 8=2.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.823 filename2: (groupid=0, 
jobs=1): err= 0: pid=3147306: Wed Nov 20 06:50:56 2024 00:40:37.823 read: IOPS=697, BW=2791KiB/s (2858kB/s)(27.3MiB/10022msec) 00:40:37.823 slat (usec): min=5, max=130, avg=16.79, stdev=15.37 00:40:37.823 clat (usec): min=7390, max=42694, avg=22760.64, stdev=3192.53 00:40:37.823 lat (usec): min=7396, max=42704, avg=22777.43, stdev=3194.10 00:40:37.823 clat percentiles (usec): 00:40:37.823 | 1.00th=[10683], 5.00th=[15926], 10.00th=[19792], 20.00th=[22414], 00:40:37.823 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.823 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25560], 00:40:37.823 | 99.00th=[31327], 99.50th=[35390], 99.90th=[39584], 99.95th=[42730], 00:40:37.823 | 99.99th=[42730] 00:40:37.823 bw ( KiB/s): min= 2640, max= 3168, per=4.20%, avg=2796.50, stdev=144.44, samples=20 00:40:37.823 iops : min= 660, max= 792, avg=699.10, stdev=36.13, samples=20 00:40:37.823 lat (msec) : 10=0.72%, 20=9.57%, 50=89.72% 00:40:37.823 cpu : usr=98.91%, sys=0.76%, ctx=15, majf=0, minf=9 00:40:37.823 IO depths : 1=4.1%, 2=8.2%, 4=18.7%, 8=60.3%, 16=8.7%, 32=0.0%, >=64=0.0% 00:40:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 issued rwts: total=6993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.823 filename2: (groupid=0, jobs=1): err= 0: pid=3147307: Wed Nov 20 06:50:56 2024 00:40:37.823 read: IOPS=681, BW=2724KiB/s (2790kB/s)(26.7MiB/10049msec) 00:40:37.823 slat (usec): min=5, max=113, avg=20.48, stdev=18.59 00:40:37.823 clat (usec): min=10874, max=52535, avg=23258.33, stdev=1767.05 00:40:37.823 lat (usec): min=10886, max=52559, avg=23278.81, stdev=1765.85 00:40:37.823 clat percentiles (usec): 00:40:37.823 | 1.00th=[18482], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:40:37.823 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.823 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:40:37.823 | 99.00th=[26608], 99.50th=[32637], 99.90th=[38536], 99.95th=[52691], 00:40:37.823 | 99.99th=[52691] 00:40:37.823 bw ( KiB/s): min= 2560, max= 2864, per=4.11%, avg=2734.90, stdev=79.67, samples=20 00:40:37.823 iops : min= 640, max= 716, avg=683.70, stdev=19.88, samples=20 00:40:37.823 lat (msec) : 20=1.43%, 50=98.48%, 100=0.09% 00:40:37.823 cpu : usr=99.06%, sys=0.61%, ctx=14, majf=0, minf=9 00:40:37.823 IO depths : 1=5.9%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 issued rwts: total=6844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.823 filename2: (groupid=0, jobs=1): err= 0: pid=3147308: Wed Nov 20 06:50:56 2024 00:40:37.823 read: IOPS=681, BW=2726KiB/s (2792kB/s)(26.7MiB/10042msec) 00:40:37.823 slat (nsec): min=5508, max=95744, avg=25261.36, stdev=16330.43 00:40:37.823 clat (usec): min=6696, max=49239, avg=23202.15, stdev=2216.84 00:40:37.823 lat (usec): min=6702, max=49247, avg=23227.41, stdev=2216.76 00:40:37.823 clat percentiles (usec): 00:40:37.823 | 1.00th=[13435], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:40:37.823 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:40:37.823 | 70.00th=[23462], 80.00th=[23725], 
90.00th=[24249], 95.00th=[24773], 00:40:37.823 | 99.00th=[30278], 99.50th=[36439], 99.90th=[40633], 99.95th=[49021], 00:40:37.823 | 99.99th=[49021] 00:40:37.823 bw ( KiB/s): min= 2608, max= 2896, per=4.11%, avg=2734.90, stdev=81.44, samples=20 00:40:37.823 iops : min= 652, max= 724, avg=683.70, stdev=20.32, samples=20 00:40:37.823 lat (msec) : 10=0.42%, 20=2.63%, 50=96.95% 00:40:37.823 cpu : usr=98.93%, sys=0.74%, ctx=15, majf=0, minf=9 00:40:37.823 IO depths : 1=5.5%, 2=11.1%, 4=22.8%, 8=53.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:40:37.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.823 issued rwts: total=6844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.823 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:37.823 00:40:37.823 Run status group 0 (all jobs): 00:40:37.823 READ: bw=65.0MiB/s (68.2MB/s), 2724KiB/s-2919KiB/s (2790kB/s-2989kB/s), io=653MiB (685MB), run=10003-10049msec 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in 
"$@" 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.823 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 bdev_null0 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 [2024-11-20 06:50:57.013409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 bdev_null1 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:37.824 { 00:40:37.824 "params": { 00:40:37.824 "name": "Nvme$subsystem", 
00:40:37.824 "trtype": "$TEST_TRANSPORT", 00:40:37.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:37.824 "adrfam": "ipv4", 00:40:37.824 "trsvcid": "$NVMF_PORT", 00:40:37.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:37.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:37.824 "hdgst": ${hdgst:-false}, 00:40:37.824 "ddgst": ${ddgst:-false} 00:40:37.824 }, 00:40:37.824 "method": "bdev_nvme_attach_controller" 00:40:37.824 } 00:40:37.824 EOF 00:40:37.824 )") 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:37.824 { 00:40:37.824 "params": { 00:40:37.824 "name": "Nvme$subsystem", 00:40:37.824 "trtype": "$TEST_TRANSPORT", 00:40:37.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:37.824 "adrfam": "ipv4", 00:40:37.824 "trsvcid": "$NVMF_PORT", 00:40:37.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:37.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:37.824 "hdgst": ${hdgst:-false}, 00:40:37.824 "ddgst": ${ddgst:-false} 00:40:37.824 }, 00:40:37.824 "method": "bdev_nvme_attach_controller" 00:40:37.824 } 00:40:37.824 EOF 00:40:37.824 )") 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:37.824 "params": { 00:40:37.824 "name": "Nvme0", 00:40:37.824 "trtype": "tcp", 00:40:37.824 "traddr": "10.0.0.2", 00:40:37.824 "adrfam": "ipv4", 00:40:37.824 "trsvcid": "4420", 00:40:37.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:37.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:37.824 "hdgst": false, 00:40:37.824 "ddgst": false 00:40:37.824 }, 00:40:37.824 "method": "bdev_nvme_attach_controller" 00:40:37.824 },{ 00:40:37.824 "params": { 00:40:37.824 "name": "Nvme1", 00:40:37.824 "trtype": "tcp", 00:40:37.824 "traddr": "10.0.0.2", 00:40:37.824 "adrfam": "ipv4", 00:40:37.824 "trsvcid": "4420", 00:40:37.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:37.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:37.824 "hdgst": false, 00:40:37.824 "ddgst": false 00:40:37.824 }, 00:40:37.824 "method": "bdev_nvme_attach_controller" 00:40:37.824 }' 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:37.824 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:37.825 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:37.825 06:50:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:37.825 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:37.825 ... 00:40:37.825 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:37.825 ... 
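Note on the fio invocation traced above: the fio_bdev wrapper LD_PRELOADs SPDK's fio plugin into a stock fio binary and selects it with --ioengine=spdk_bdev, so fio submits I/O to the SPDK bdevs named in the JSON config instead of to kernel block devices. A minimal standalone sketch of the same mechanism, assuming an SPDK tree configured --with-fio so that build/fio/spdk_bdev exists; the paths, temp files, and the bdev name Nvme0n1 are illustrative choices, not the CI values (the trace passes the config and job file as /dev/fd/62 and /dev/fd/61 instead):

#!/usr/bin/env bash
# Sketch: run stock fio through the SPDK bdev ioengine.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
PLUGIN=$SPDK_DIR/build/fio/spdk_bdev

# Bdev config: attach one NVMe/TCP controller, mirroring the
# bdev_nvme_attach_controller params printed by gen_nvmf_target_json above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0"
      }
    }]
  }]
}
EOF

# Job file: "filename" is the bdev name the config exposes; the SPDK
# plugin requires thread=1.
cat > /tmp/null_dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
[randread]
filename=Nvme0n1
rw=randread
bs=8k
iodepth=8
time_based=1
runtime=5
EOF

LD_PRELOAD=$PLUGIN fio /tmp/null_dif.fio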
00:40:37.825 fio-3.35 00:40:37.825 Starting 4 threads 00:40:44.407 00:40:44.407 filename0: (groupid=0, jobs=1): err= 0: pid=3149780: Wed Nov 20 06:51:03 2024 00:40:44.407 read: IOPS=2972, BW=23.2MiB/s (24.4MB/s)(116MiB/5003msec) 00:40:44.407 slat (nsec): min=5550, max=60918, avg=8561.75, stdev=3594.42 00:40:44.407 clat (usec): min=962, max=4323, avg=2667.82, stdev=153.94 00:40:44.407 lat (usec): min=988, max=4331, avg=2676.38, stdev=153.73 00:40:44.407 clat percentiles (usec): 00:40:44.407 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:40:44.407 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:44.407 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2868], 00:40:44.407 | 99.00th=[ 3163], 99.50th=[ 3326], 99.90th=[ 4047], 99.95th=[ 4228], 00:40:44.407 | 99.99th=[ 4293] 00:40:44.407 bw ( KiB/s): min=23616, max=23888, per=25.00%, avg=23779.56, stdev=96.26, samples=9 00:40:44.407 iops : min= 2952, max= 2986, avg=2972.44, stdev=12.03, samples=9 00:40:44.407 lat (usec) : 1000=0.01% 00:40:44.407 lat (msec) : 2=0.48%, 4=99.39%, 10=0.11% 00:40:44.407 cpu : usr=96.18%, sys=3.52%, ctx=10, majf=0, minf=36 00:40:44.407 IO depths : 1=0.1%, 2=0.1%, 4=71.3%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 issued rwts: total=14872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.407 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:44.407 filename0: (groupid=0, jobs=1): err= 0: pid=3149781: Wed Nov 20 06:51:03 2024 00:40:44.407 read: IOPS=2965, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:40:44.407 slat (nsec): min=5503, max=74722, avg=8853.69, stdev=3770.18 00:40:44.407 clat (usec): min=1403, max=5495, avg=2673.00, stdev=160.06 00:40:44.407 lat (usec): min=1409, max=5527, avg=2681.85, stdev=160.27 00:40:44.407 clat percentiles (usec): 00:40:44.407 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 00:40:44.407 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:44.407 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2900], 00:40:44.407 | 99.00th=[ 3195], 99.50th=[ 3523], 99.90th=[ 4293], 99.95th=[ 5407], 00:40:44.407 | 99.99th=[ 5473] 00:40:44.407 bw ( KiB/s): min=23264, max=23856, per=24.93%, avg=23710.22, stdev=190.23, samples=9 00:40:44.407 iops : min= 2908, max= 2982, avg=2963.78, stdev=23.78, samples=9 00:40:44.407 lat (msec) : 2=0.26%, 4=99.56%, 10=0.18% 00:40:44.407 cpu : usr=97.04%, sys=2.68%, ctx=7, majf=0, minf=70 00:40:44.407 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 issued rwts: total=14832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.407 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:44.407 filename1: (groupid=0, jobs=1): err= 0: pid=3149782: Wed Nov 20 06:51:03 2024 00:40:44.407 read: IOPS=2972, BW=23.2MiB/s (24.4MB/s)(116MiB/5002msec) 00:40:44.407 slat (nsec): min=5501, max=95646, avg=8591.49, stdev=3840.22 00:40:44.407 clat (usec): min=1311, max=4838, avg=2669.65, stdev=138.96 00:40:44.407 lat (usec): min=1317, max=4844, avg=2678.24, stdev=139.13 00:40:44.407 clat percentiles (usec): 00:40:44.407 | 1.00th=[ 2245], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 
00:40:44.407 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:44.407 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2868], 00:40:44.407 | 99.00th=[ 3064], 99.50th=[ 3228], 99.90th=[ 3982], 99.95th=[ 4178], 00:40:44.407 | 99.99th=[ 4817] 00:40:44.407 bw ( KiB/s): min=23488, max=23984, per=25.00%, avg=23774.22, stdev=130.09, samples=9 00:40:44.407 iops : min= 2936, max= 2998, avg=2971.78, stdev=16.26, samples=9 00:40:44.407 lat (msec) : 2=0.34%, 4=99.58%, 10=0.07% 00:40:44.407 cpu : usr=96.74%, sys=2.96%, ctx=5, majf=0, minf=41 00:40:44.407 IO depths : 1=0.1%, 2=0.1%, 4=69.3%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 issued rwts: total=14870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.407 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:44.407 filename1: (groupid=0, jobs=1): err= 0: pid=3149783: Wed Nov 20 06:51:03 2024 00:40:44.407 read: IOPS=2980, BW=23.3MiB/s (24.4MB/s)(116MiB/5001msec) 00:40:44.407 slat (nsec): min=5498, max=97537, avg=8973.32, stdev=4086.48 00:40:44.407 clat (usec): min=1180, max=5525, avg=2661.31, stdev=213.80 00:40:44.407 lat (usec): min=1186, max=5555, avg=2670.28, stdev=214.03 00:40:44.407 clat percentiles (usec): 00:40:44.407 | 1.00th=[ 2057], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2638], 00:40:44.407 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:40:44.407 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2900], 00:40:44.407 | 99.00th=[ 3556], 99.50th=[ 3818], 99.90th=[ 4490], 99.95th=[ 5014], 00:40:44.407 | 99.99th=[ 5473] 00:40:44.407 bw ( KiB/s): min=23230, max=24368, per=25.08%, avg=23850.44, stdev=345.19, samples=9 00:40:44.407 iops : min= 2903, max= 3046, avg=2981.22, stdev=43.32, samples=9 00:40:44.407 lat (msec) : 2=0.73%, 4=98.97%, 10=0.30% 00:40:44.407 cpu : usr=96.94%, sys=2.76%, ctx=6, majf=0, minf=53 00:40:44.407 IO depths : 1=0.1%, 2=0.1%, 4=71.1%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.407 issued rwts: total=14904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.407 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:44.407 00:40:44.407 Run status group 0 (all jobs): 00:40:44.407 READ: bw=92.9MiB/s (97.4MB/s), 23.2MiB/s-23.3MiB/s (24.3MB/s-24.4MB/s), io=465MiB (487MB), run=5001-5003msec 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.407 06:51:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.407 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.408 06:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:44.408 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.408 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.408 00:40:44.408 real 0m24.883s 00:40:44.408 user 5m18.690s 00:40:44.408 sys 0m4.269s 00:40:44.408 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:44.408 06:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 ************************************ 00:40:44.408 END TEST fio_dif_rand_params 00:40:44.408 ************************************ 00:40:44.408 06:51:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:44.408 06:51:03 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:44.408 06:51:03 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:44.408 06:51:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 ************************************ 00:40:44.408 START TEST fio_dif_digest 00:40:44.408 ************************************ 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 bdev_null0 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:44.408 [2024-11-20 06:51:03.730264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:44.408 { 00:40:44.408 "params": { 00:40:44.408 "name": "Nvme$subsystem", 00:40:44.408 "trtype": "$TEST_TRANSPORT", 00:40:44.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:44.408 "adrfam": "ipv4", 00:40:44.408 "trsvcid": "$NVMF_PORT", 00:40:44.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:44.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:44.408 "hdgst": ${hdgst:-false}, 00:40:44.408 "ddgst": ${ddgst:-false} 00:40:44.408 }, 00:40:44.408 "method": "bdev_nvme_attach_controller" 00:40:44.408 } 00:40:44.408 EOF 00:40:44.408 
)") 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:44.408 "params": { 00:40:44.408 "name": "Nvme0", 00:40:44.408 "trtype": "tcp", 00:40:44.408 "traddr": "10.0.0.2", 00:40:44.408 "adrfam": "ipv4", 00:40:44.408 "trsvcid": "4420", 00:40:44.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:44.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:44.408 "hdgst": true, 00:40:44.408 "ddgst": true 00:40:44.408 }, 00:40:44.408 "method": "bdev_nvme_attach_controller" 00:40:44.408 }' 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:44.408 06:51:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:44.408 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:44.408 ... 
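The gen_nvmf_target_json trace that produced the JSON above follows a common shell pattern: one heredoc fragment per subsystem is appended to a bash array, the array is joined on ',' and the result is validated and pretty-printed by jq. The real helper emits a larger document; the following is a minimal reconstruction of just the visible pattern, with the splice into a complete bdev-subsystem wrapper assumed rather than copied:

# Minimal reconstruction of the heredoc-array + jq pattern (sketch).
gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Join the fragments with ',' and let jq validate/pretty-print the whole
  # document; a bare comma-joined list is not valid JSON on its own.
  jq . <<JSON
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s' "${config[*]}") ]
  }]
}
JSON
}

# The digest test enables digests before calling the helper, which is how
# "hdgst": true / "ddgst": true end up in the printed config above, e.g.:
#   hdgst=true ddgst=true gen_target_json_sketch 0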
00:40:44.408 fio-3.35 00:40:44.408 Starting 3 threads 00:40:54.452 00:40:54.452 filename0: (groupid=0, jobs=1): err= 0: pid=3151135: Wed Nov 20 06:51:14 2024 00:40:54.452 read: IOPS=328, BW=41.0MiB/s (43.0MB/s)(411MiB/10006msec) 00:40:54.452 slat (nsec): min=5840, max=32626, avg=6586.37, stdev=1036.56 00:40:54.452 clat (usec): min=5570, max=12406, avg=9125.37, stdev=957.45 00:40:54.452 lat (usec): min=5577, max=12425, avg=9131.95, stdev=957.52 00:40:54.452 clat percentiles (usec): 00:40:54.452 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8291], 00:40:54.452 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:40:54.452 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:40:54.452 | 99.00th=[11338], 99.50th=[11600], 99.90th=[11863], 99.95th=[12256], 00:40:54.452 | 99.99th=[12387] 00:40:54.452 bw ( KiB/s): min=38656, max=46592, per=35.99%, avg=41903.16, stdev=3073.25, samples=19 00:40:54.452 iops : min= 302, max= 364, avg=327.37, stdev=24.01, samples=19 00:40:54.452 lat (msec) : 10=80.61%, 20=19.39% 00:40:54.452 cpu : usr=94.80%, sys=4.96%, ctx=21, majf=0, minf=132 00:40:54.452 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:54.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.452 issued rwts: total=3286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:54.453 filename0: (groupid=0, jobs=1): err= 0: pid=3151137: Wed Nov 20 06:51:14 2024 00:40:54.453 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(364MiB/10047msec) 00:40:54.453 slat (nsec): min=5876, max=31423, avg=8698.80, stdev=1374.85 00:40:54.453 clat (usec): min=7252, max=50009, avg=10328.52, stdev=1313.63 00:40:54.453 lat (usec): min=7261, max=50015, avg=10337.22, stdev=1313.53 00:40:54.453 clat percentiles (usec): 00:40:54.453 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:40:54.453 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:40:54.453 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:40:54.453 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13173], 99.95th=[46924], 00:40:54.453 | 99.99th=[50070] 00:40:54.453 bw ( KiB/s): min=36096, max=38912, per=31.98%, avg=37235.20, stdev=754.29, samples=20 00:40:54.453 iops : min= 282, max= 304, avg=290.90, stdev= 5.89, samples=20 00:40:54.453 lat (msec) : 10=36.72%, 20=63.21%, 50=0.03%, 100=0.03% 00:40:54.453 cpu : usr=94.58%, sys=5.16%, ctx=18, majf=0, minf=97 00:40:54.453 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:54.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.453 issued rwts: total=2911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:54.453 filename0: (groupid=0, jobs=1): err= 0: pid=3151138: Wed Nov 20 06:51:14 2024 00:40:54.453 read: IOPS=292, BW=36.6MiB/s (38.4MB/s)(368MiB/10045msec) 00:40:54.453 slat (nsec): min=5918, max=31939, avg=6696.87, stdev=1026.72 00:40:54.453 clat (usec): min=7406, max=51985, avg=10220.05, stdev=2264.36 00:40:54.453 lat (usec): min=7413, max=51992, avg=10226.75, stdev=2264.40 00:40:54.453 clat percentiles (usec): 00:40:54.453 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:40:54.453 | 
30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:40:54.453 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:40:54.453 | 99.00th=[12125], 99.50th=[12649], 99.90th=[51643], 99.95th=[51643], 00:40:54.453 | 99.99th=[52167] 00:40:54.453 bw ( KiB/s): min=33536, max=39168, per=32.32%, avg=37632.00, stdev=1382.35, samples=20 00:40:54.453 iops : min= 262, max= 306, avg=294.00, stdev=10.80, samples=20 00:40:54.453 lat (msec) : 10=46.09%, 20=53.64%, 50=0.03%, 100=0.24% 00:40:54.453 cpu : usr=94.40%, sys=5.36%, ctx=14, majf=0, minf=149 00:40:54.453 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:54.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.453 issued rwts: total=2942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:54.453 00:40:54.453 Run status group 0 (all jobs): 00:40:54.453 READ: bw=114MiB/s (119MB/s), 36.2MiB/s-41.0MiB/s (38.0MB/s-43.0MB/s), io=1142MiB (1198MB), run=10006-10047msec 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.712 00:40:54.712 real 0m11.159s 00:40:54.712 user 0m43.017s 00:40:54.712 sys 0m1.860s 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:54.712 06:51:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:54.712 ************************************ 00:40:54.712 END TEST fio_dif_digest 00:40:54.712 ************************************ 00:40:54.712 06:51:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:54.712 06:51:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:54.712 06:51:14 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:54.713 rmmod nvme_tcp 00:40:54.713 rmmod nvme_fabrics 00:40:54.713 rmmod nvme_keyring 00:40:54.713 06:51:14 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3140771 ']' 00:40:54.713 06:51:14 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3140771 00:40:54.713 06:51:14 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3140771 ']' 00:40:54.713 06:51:14 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3140771 00:40:54.713 06:51:14 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:40:54.713 06:51:14 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:54.713 06:51:14 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3140771 00:40:54.973 06:51:15 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:54.973 06:51:15 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:54.973 06:51:15 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3140771' 00:40:54.973 killing process with pid 3140771 00:40:54.973 06:51:15 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3140771 00:40:54.973 06:51:15 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3140771 00:40:54.973 06:51:15 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:54.973 06:51:15 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:58.269 Waiting for block devices as requested 00:40:58.269 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:58.529 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:58.529 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:58.529 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:58.790 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:58.790 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:58.790 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:59.050 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:59.050 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:59.309 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:59.309 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:59.309 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:59.569 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:59.569 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:59.569 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:59.829 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:59.829 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:00.089 06:51:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.089 06:51:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:00.089 06:51:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:02.632 06:51:22 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:02.632 
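The tail of the trace above is the nvmf_dif teardown: subsystems and null bdevs are deleted over RPC, the target process is killed, the kernel NVMe/TCP modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the SPDK_NVMF iptables rules are stripped, and the test address is flushed from the initiator interface. A condensed sketch of the same sequence; tgt_pid stands in for the nvmf_tgt pid tracked by the harness, and cvl_0_1 is the interface name from this run:

# Sketch: teardown mirroring the traced cleanup.
RPC=./scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0

kill "$tgt_pid" && wait "$tgt_pid" 2>/dev/null   # killprocess: stop nvmf_tgt

modprobe -v -r nvme-tcp       # in the log this also pulled nvme_fabrics
modprobe -v -r nvme-fabrics   # and nvme_keyring as dependencies

# Drop only the SPDK_NVMF-tagged firewall rules, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip -4 addr flush cvl_0_1      # release the initiator-side test address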
00:41:02.632 real 1m18.755s 00:41:02.632 user 8m6.708s 00:41:02.632 sys 0m21.726s 00:41:02.632 06:51:22 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:02.632 06:51:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:02.632 ************************************ 00:41:02.632 END TEST nvmf_dif 00:41:02.632 ************************************ 00:41:02.632 06:51:22 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:02.632 06:51:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:02.632 06:51:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:02.632 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:41:02.632 ************************************ 00:41:02.632 START TEST nvmf_abort_qd_sizes 00:41:02.632 ************************************ 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:02.632 * Looking for test storage... 00:41:02.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:02.632 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:02.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.633 --rc genhtml_branch_coverage=1 00:41:02.633 --rc genhtml_function_coverage=1 00:41:02.633 --rc genhtml_legend=1 00:41:02.633 --rc geninfo_all_blocks=1 00:41:02.633 --rc geninfo_unexecuted_blocks=1 00:41:02.633 00:41:02.633 ' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:02.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.633 --rc genhtml_branch_coverage=1 00:41:02.633 --rc genhtml_function_coverage=1 00:41:02.633 --rc genhtml_legend=1 00:41:02.633 --rc geninfo_all_blocks=1 00:41:02.633 --rc geninfo_unexecuted_blocks=1 00:41:02.633 00:41:02.633 ' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:02.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.633 --rc genhtml_branch_coverage=1 00:41:02.633 --rc genhtml_function_coverage=1 00:41:02.633 --rc genhtml_legend=1 00:41:02.633 --rc geninfo_all_blocks=1 00:41:02.633 --rc geninfo_unexecuted_blocks=1 00:41:02.633 00:41:02.633 ' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:02.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.633 --rc genhtml_branch_coverage=1 00:41:02.633 --rc genhtml_function_coverage=1 00:41:02.633 --rc genhtml_legend=1 00:41:02.633 --rc geninfo_all_blocks=1 00:41:02.633 --rc geninfo_unexecuted_blocks=1 00:41:02.633 00:41:02.633 ' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:02.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:02.633 06:51:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:10.773 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:10.773 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:10.773 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:10.773 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:10.773 06:51:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:10.773 06:51:29 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:10.773 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:10.773 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:10.773 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:10.774 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:10.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:10.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:41:10.774 00:41:10.774 --- 10.0.0.2 ping statistics --- 00:41:10.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:10.774 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:41:10.774 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:10.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:10.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:41:10.774 00:41:10.774 --- 10.0.0.1 ping statistics --- 00:41:10.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:10.774 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:41:10.774 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:10.774 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:10.774 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:10.774 06:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:13.320 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:13.320 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:13.581 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:13.842 06:51:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:13.842 06:51:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:13.842 06:51:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:13.842 06:51:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:13.842 06:51:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:13.842 06:51:33 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:13.842 06:51:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3161124 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3161124 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3161124 ']' 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
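Stripped of the xtrace prefixes, the network prologue traced above reduces to the commands below. This is a sketch of this specific run, not a general recipe: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are the values chosen by nvmf_tcp_init here.

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1         # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                 # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # the trace adds -m comment 'SPDK_NVMF:...' so teardown can find this rule
  ping -c 1 10.0.0.2                                           # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

Both pings come back with 0% loss above, so the environment check passes and the suite proceeds to start the target application inside the namespace.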
00:41:13.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:13.843 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:13.843 [2024-11-20 06:51:34.067360] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:41:13.843 [2024-11-20 06:51:34.067424] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:14.104 [2024-11-20 06:51:34.170443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:14.104 [2024-11-20 06:51:34.225951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:14.104 [2024-11-20 06:51:34.226009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:14.104 [2024-11-20 06:51:34.226019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:14.104 [2024-11-20 06:51:34.226026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:14.104 [2024-11-20 06:51:34.226032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:14.104 [2024-11-20 06:51:34.228411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.104 [2024-11-20 06:51:34.228683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:14.104 [2024-11-20 06:51:34.228849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:14.105 [2024-11-20 06:51:34.228851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:14.677 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:14.678 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:41:14.678 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:14.678 
06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:14.678 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:14.678 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:14.678 06:51:34 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:41:14.678 06:51:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:14.678 06:51:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:41:14.938 06:51:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:14.938 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:14.938 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:14.938 06:51:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:14.938 ************************************ 00:41:14.938 START TEST spdk_target_abort 00:41:14.938 ************************************ 00:41:14.938 06:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:41:14.938 06:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:14.938 06:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:41:14.938 06:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.938 06:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:15.199 spdk_targetn1 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:15.199 [2024-11-20 06:51:35.304027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:15.199 [2024-11-20 06:51:35.352366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:15.199 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:15.200 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:15.200 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:15.200 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:15.200 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:15.200 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:15.200 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:15.200 06:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:15.461 [2024-11-20 06:51:35.540682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:272 len:8 PRP1 0x200004abe000 PRP2 0x0 00:41:15.461 [2024-11-20 06:51:35.540718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0023 p:1 m:0 dnr:0 00:41:15.461 [2024-11-20 06:51:35.541003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:296 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:41:15.461 [2024-11-20 06:51:35.541019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0026 p:1 m:0 dnr:0 00:41:15.461 [2024-11-20 06:51:35.548760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:496 len:8 PRP1 0x200004abe000 PRP2 0x0 00:41:15.461 [2024-11-20 06:51:35.548779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:41:15.461 [2024-11-20 06:51:35.564759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:976 len:8 PRP1 0x200004abe000 PRP2 0x0 00:41:15.461 [2024-11-20 06:51:35.564779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:007b p:1 m:0 dnr:0 00:41:15.461 [2024-11-20 06:51:35.598600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2112 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:41:15.461 [2024-11-20 06:51:35.598622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:18.760 Initializing NVMe Controllers 00:41:18.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:18.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:18.760 Initialization complete. Launching workers. 
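For reference, the target bring-up and the abort loop traced above condense to the following. rpc_cmd in the trace is the harness's wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock; the PCI address, NQN, serial and IP are the values from this run.

  # attach the local NVMe device as a bdev, then export it over NVMe/TCP
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # rabort: one abort run per queue depth (-q), mixed read/write workload, 4 KiB I/O
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

In the result lines that follow, 'abort submitted' plus 'failed to submit' equals the I/O total the run raced against, and 'success' plus 'unsuccessful' equals the submitted abort count: for the first run, 2754 + 10054 = 12808 = 12803 completed + 5 failed, and 821 + 1933 = 2754.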
00:41:18.760 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12803, failed: 5 00:41:18.760 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2754, failed to submit 10054 00:41:18.760 success 821, unsuccessful 1933, failed 0 00:41:18.760 06:51:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:18.760 06:51:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:18.760 [2024-11-20 06:51:38.765348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:640 len:8 PRP1 0x200004e58000 PRP2 0x0 00:41:18.760 [2024-11-20 06:51:38.765392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:41:18.760 [2024-11-20 06:51:38.781282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:1008 len:8 PRP1 0x200004e56000 PRP2 0x0 00:41:18.760 [2024-11-20 06:51:38.781308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0080 p:1 m:0 dnr:0 00:41:18.760 [2024-11-20 06:51:38.804937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:1576 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:41:18.761 [2024-11-20 06:51:38.804960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00c8 p:1 m:0 dnr:0 00:41:18.761 [2024-11-20 06:51:38.837363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2272 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:41:18.761 [2024-11-20 06:51:38.837386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:18.761 [2024-11-20 06:51:38.853109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:2616 len:8 PRP1 0x200004e54000 PRP2 0x0 00:41:18.761 [2024-11-20 06:51:38.853130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:18.761 [2024-11-20 06:51:38.858432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2800 len:8 PRP1 0x200004e46000 PRP2 0x0 00:41:18.761 [2024-11-20 06:51:38.858454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:18.761 [2024-11-20 06:51:38.884468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:3384 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:41:18.761 [2024-11-20 06:51:38.884489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:41:19.022 [2024-11-20 06:51:39.227538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:11568 len:8 PRP1 0x200004e54000 PRP2 0x0 00:41:19.022 [2024-11-20 06:51:39.227566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00aa p:0 m:0 dnr:0 00:41:19.961 [2024-11-20 06:51:39.907331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 
lba:26840 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:41:19.961 [2024-11-20 06:51:39.907362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:20.588 [2024-11-20 06:51:40.753016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:46440 len:8 PRP1 0x200004e64000 PRP2 0x0 00:41:20.588 [2024-11-20 06:51:40.753042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:00b0 p:1 m:0 dnr:0 00:41:20.889 [2024-11-20 06:51:41.074995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:53672 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:41:20.889 [2024-11-20 06:51:41.075018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:41:21.858 Initializing NVMe Controllers 00:41:21.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:21.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:21.858 Initialization complete. Launching workers. 00:41:21.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8585, failed: 11 00:41:21.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1198, failed to submit 7398 00:41:21.858 success 347, unsuccessful 851, failed 0 00:41:21.858 06:51:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:21.858 06:51:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:21.858 [2024-11-20 06:51:42.089806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:155 nsid:1 lba:1952 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:41:21.858 [2024-11-20 06:51:42.089831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:155 cdw0:0 sqhd:0008 p:1 m:0 dnr:0 00:41:22.428 [2024-11-20 06:51:42.570544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:151 nsid:1 lba:57640 len:8 PRP1 0x200004af2000 PRP2 0x0 00:41:22.428 [2024-11-20 06:51:42.570568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:151 cdw0:0 sqhd:002e p:1 m:0 dnr:0 00:41:23.368 [2024-11-20 06:51:43.441482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:158200 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:41:23.368 [2024-11-20 06:51:43.441520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:24.307 [2024-11-20 06:51:44.420578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:154 nsid:1 lba:272936 len:8 PRP1 0x200004aea000 PRP2 0x0 00:41:24.307 [2024-11-20 06:51:44.420601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:154 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:24.878 Initializing NVMe Controllers 00:41:24.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:24.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:24.878 Initialization complete. 
Launching workers. 00:41:24.878 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43711, failed: 4 00:41:24.878 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2973, failed to submit 40742 00:41:24.878 success 592, unsuccessful 2381, failed 0 00:41:24.878 06:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:24.878 06:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.878 06:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:24.878 06:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.878 06:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:24.878 06:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.878 06:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:26.790 06:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.791 06:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3161124 00:41:26.791 06:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3161124 ']' 00:41:26.791 06:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3161124 00:41:26.791 06:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:41:26.791 06:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:26.791 06:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3161124 00:41:26.791 06:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:26.791 06:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:26.791 06:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3161124' 00:41:26.791 killing process with pid 3161124 00:41:26.791 06:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3161124 00:41:26.791 06:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3161124 00:41:27.052 00:41:27.052 real 0m12.148s 00:41:27.052 user 0m49.428s 00:41:27.052 sys 0m2.107s 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:27.052 ************************************ 00:41:27.052 END TEST spdk_target_abort 00:41:27.052 ************************************ 00:41:27.052 06:51:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:27.052 06:51:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:27.052 06:51:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:27.052 06:51:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:27.052 ************************************ 00:41:27.052 START TEST 
kernel_target_abort 00:41:27.052 ************************************ 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:27.052 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:27.053 06:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:30.351 Waiting for block devices as requested 00:41:30.351 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:30.351 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:30.612 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:30.612 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:30.612 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:30.872 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:30.873 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:30.873 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:31.133 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:31.133 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:31.394 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:31.394 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:31.394 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:31.654 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:31.654 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:31.654 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:31.914 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:32.176 No valid GPT data, bailing 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:32.176 06:51:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:41:32.176 00:41:32.176 Discovery Log Number of Records 2, Generation counter 2 00:41:32.176 =====Discovery Log Entry 0====== 00:41:32.176 trtype: tcp 00:41:32.176 adrfam: ipv4 00:41:32.176 subtype: current discovery subsystem 00:41:32.176 treq: not specified, sq flow control disable supported 00:41:32.176 portid: 1 00:41:32.176 trsvcid: 4420 00:41:32.176 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:32.176 traddr: 10.0.0.1 00:41:32.176 eflags: none 00:41:32.176 sectype: none 00:41:32.176 =====Discovery Log Entry 1====== 00:41:32.176 trtype: tcp 00:41:32.176 adrfam: ipv4 00:41:32.176 subtype: nvme subsystem 00:41:32.176 treq: not specified, sq flow control disable supported 00:41:32.176 portid: 1 00:41:32.176 trsvcid: 4420 00:41:32.176 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:32.176 traddr: 10.0.0.1 00:41:32.176 eflags: none 00:41:32.176 sectype: none 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:32.176 06:51:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:32.176 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:32.177 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:32.177 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:32.177 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:32.177 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:32.177 06:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:35.479 Initializing NVMe Controllers 00:41:35.479 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:35.479 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:35.479 Initialization complete. Launching workers. 00:41:35.479 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68264, failed: 0 00:41:35.479 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68264, failed to submit 0 00:41:35.479 success 0, unsuccessful 68264, failed 0 00:41:35.479 06:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:35.479 06:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:38.795 Initializing NVMe Controllers 00:41:38.795 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:38.795 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:38.795 Initialization complete. Launching workers. 
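For comparison with the SPDK-target half, the kernel-target setup traced before these runs is plain configfs manipulation. The mkdir/echo/ln sequence above maps onto the stock nvmet layout roughly as below; the attribute file names themselves are suppressed in this trace, so the right-hand paths are assumptions taken from the standard nvmet configfs interface rather than values read out of the log.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute file
  echo 1            > "$subsys/attr_allow_any_host"              # assumed attribute file
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Note how the first run's statistics invert against the kernel target: all 68264 aborts were reported unsuccessful, where the SPDK target aborted a fraction of in-flight I/O.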
00:41:38.795 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119063, failed: 0 00:41:38.795 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29966, failed to submit 89097 00:41:38.795 success 0, unsuccessful 29966, failed 0 00:41:38.795 06:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:38.795 06:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:42.101 Initializing NVMe Controllers 00:41:42.101 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:42.101 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:42.101 Initialization complete. Launching workers. 00:41:42.101 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145562, failed: 0 00:41:42.101 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36426, failed to submit 109136 00:41:42.101 success 0, unsuccessful 36426, failed 0 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:41:42.101 06:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:45.398 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:45.398 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:41:45.398 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:47.311 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:47.311 00:41:47.311 real 0m20.220s 00:41:47.311 user 0m9.869s 00:41:47.311 sys 0m6.013s 00:41:47.311 06:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:47.311 06:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:47.311 ************************************ 00:41:47.311 END TEST kernel_target_abort 00:41:47.311 ************************************ 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:47.311 rmmod nvme_tcp 00:41:47.311 rmmod nvme_fabrics 00:41:47.311 rmmod nvme_keyring 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3161124 ']' 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3161124 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3161124 ']' 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3161124 00:41:47.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3161124) - No such process 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3161124 is not found' 00:41:47.311 Process with pid 3161124 is not found 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:47.311 06:52:07 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:50.611 Waiting for block devices as requested 00:41:50.611 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:50.872 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:50.872 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:50.872 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:51.133 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:51.133 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:51.133 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:51.394 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:51.394 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:51.654 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:51.654 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:51.654 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:51.914 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:51.914 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:51.914 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:52.173 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:52.173 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:52.432 06:52:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:54.971 06:52:14 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:54.971 00:41:54.971 real 0m52.219s 00:41:54.971 user 1m4.790s 00:41:54.971 sys 0m19.079s 00:41:54.971 06:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:54.971 06:52:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:54.971 ************************************ 00:41:54.971 END TEST nvmf_abort_qd_sizes 00:41:54.971 ************************************ 00:41:54.971 06:52:14 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:54.971 06:52:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:54.971 06:52:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:54.971 06:52:14 -- common/autotest_common.sh@10 -- # set +x 00:41:54.971 ************************************ 00:41:54.971 START TEST keyring_file 00:41:54.971 ************************************ 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:54.972 * Looking for test storage... 
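The nvmf teardown traced just before the keyring suite starts is the mirror image of that setup; in effect:

  modprobe -v -r nvme-tcp                               # unloads nvme_tcp (and, per the rmmod lines above, nvme_fabrics/nvme_keyring)
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rule tagged SPDK_NVMF at setup time
  ip netns delete cvl_0_0_ns_spdk                       # inferred: _remove_spdk_ns runs with xtrace disabled, so its body is not in the log
  ip -4 addr flush cvl_0_1

The killprocess error ('No such process') above is expected: pid 3161124 was already killed at the end of spdk_target_abort, and the final nvmftestfini pass merely reports that before resetting the devices.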
00:41:54.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@345 -- # : 1 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@353 -- # local d=1 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@355 -- # echo 1 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@353 -- # local d=2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@355 -- # echo 2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@368 -- # return 0 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.972 --rc genhtml_branch_coverage=1 00:41:54.972 --rc genhtml_function_coverage=1 00:41:54.972 --rc genhtml_legend=1 00:41:54.972 --rc geninfo_all_blocks=1 00:41:54.972 --rc geninfo_unexecuted_blocks=1 00:41:54.972 00:41:54.972 ' 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.972 --rc genhtml_branch_coverage=1 00:41:54.972 --rc genhtml_function_coverage=1 00:41:54.972 --rc genhtml_legend=1 00:41:54.972 --rc geninfo_all_blocks=1 
00:41:54.972 --rc geninfo_unexecuted_blocks=1 00:41:54.972 00:41:54.972 ' 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.972 --rc genhtml_branch_coverage=1 00:41:54.972 --rc genhtml_function_coverage=1 00:41:54.972 --rc genhtml_legend=1 00:41:54.972 --rc geninfo_all_blocks=1 00:41:54.972 --rc geninfo_unexecuted_blocks=1 00:41:54.972 00:41:54.972 ' 00:41:54.972 06:52:14 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.972 --rc genhtml_branch_coverage=1 00:41:54.972 --rc genhtml_function_coverage=1 00:41:54.972 --rc genhtml_legend=1 00:41:54.972 --rc geninfo_all_blocks=1 00:41:54.972 --rc geninfo_unexecuted_blocks=1 00:41:54.972 00:41:54.972 ' 00:41:54.972 06:52:14 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:54.972 06:52:14 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:54.972 06:52:14 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.972 06:52:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.972 06:52:14 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.972 06:52:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:54.972 06:52:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@51 -- # : 0 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:54.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:54.972 06:52:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:54.972 06:52:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:54.972 06:52:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:54.972 06:52:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:54.972 06:52:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:54.972 06:52:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
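The prep_key xtrace that follows materializes the two test keys (key0 and key1). Read together, the steps are: take a raw hex PSK and a digest selector, wrap them into an NVMe TLS interchange string (note the NVMeTLSkey-1 prefix passed to format_key), write the result to a mktemp file, and restrict it to mode 0600 so the keyring module will accept it. A condensed sketch under those assumptions; the actual derivation happens in an inline script the trace only shows as 'python -':

    prep_key() {   # sketch of keyring/common.sh as replayed below; details assumed
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)                                     # e.g. /tmp/tmp.XXXXXXXXXX
        format_interchange_psk "$key" "$digest" > "$path"  # emits NVMeTLSkey-1:...
        chmod 0600 "$path"    # looser modes are rejected, as tested further down
        echo "$path"
    }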
00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1R2CuWeweq 00:41:54.972 06:52:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:54.972 06:52:14 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:54.972 06:52:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1R2CuWeweq 00:41:54.972 06:52:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1R2CuWeweq 00:41:54.972 06:52:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1R2CuWeweq 00:41:54.973 06:52:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uLoQhJuhU4 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:54.973 06:52:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:54.973 06:52:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:54.973 06:52:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:54.973 06:52:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:54.973 06:52:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:54.973 06:52:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uLoQhJuhU4 00:41:54.973 06:52:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uLoQhJuhU4 00:41:54.973 06:52:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uLoQhJuhU4 00:41:54.973 06:52:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=3171487 00:41:54.973 06:52:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3171487 00:41:54.973 06:52:15 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:54.973 06:52:15 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3171487 ']' 00:41:54.973 06:52:15 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:54.973 06:52:15 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:54.973 06:52:15 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:54.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:54.973 06:52:15 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:54.973 06:52:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:54.973 [2024-11-20 06:52:15.154758] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:41:54.973 [2024-11-20 06:52:15.154813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171487 ] 00:41:54.973 [2024-11-20 06:52:15.243430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.232 [2024-11-20 06:52:15.280487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:55.803 06:52:15 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:55.803 06:52:15 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:41:55.803 06:52:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:55.803 06:52:15 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.803 06:52:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:55.803 [2024-11-20 06:52:15.949366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:55.803 null0 00:41:55.803 [2024-11-20 06:52:15.981416] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:55.803 [2024-11-20 06:52:15.981777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.803 06:52:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:55.803 [2024-11-20 06:52:16.013473] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:55.803 request: 00:41:55.803 { 00:41:55.803 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:55.803 "secure_channel": false, 00:41:55.803 "listen_address": { 00:41:55.803 "trtype": "tcp", 00:41:55.803 "traddr": "127.0.0.1", 00:41:55.803 "trsvcid": "4420" 00:41:55.803 }, 00:41:55.803 "method": "nvmf_subsystem_add_listener", 00:41:55.803 "req_id": 1 00:41:55.803 } 00:41:55.803 Got JSON-RPC error response 00:41:55.803 response: 00:41:55.803 { 00:41:55.803 
"code": -32602, 00:41:55.803 "message": "Invalid parameters" 00:41:55.803 } 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:55.803 06:52:16 keyring_file -- keyring/file.sh@47 -- # bperfpid=3171534 00:41:55.803 06:52:16 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3171534 /var/tmp/bperf.sock 00:41:55.803 06:52:16 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3171534 ']' 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:55.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:55.803 06:52:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:55.803 [2024-11-20 06:52:16.075318] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:41:55.803 [2024-11-20 06:52:16.075378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171534 ] 00:41:56.064 [2024-11-20 06:52:16.166433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.064 [2024-11-20 06:52:16.219752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:56.634 06:52:16 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:56.635 06:52:16 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:41:56.635 06:52:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:41:56.635 06:52:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:41:56.907 06:52:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uLoQhJuhU4 00:41:56.907 06:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uLoQhJuhU4 00:41:57.172 06:52:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:41:57.172 06:52:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:57.172 06:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:57.172 06:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:57.172 06:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:41:57.172 06:52:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1R2CuWeweq == \/\t\m\p\/\t\m\p\.\1\R\2\C\u\W\e\w\e\q ]] 00:41:57.172 06:52:17 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:41:57.172 06:52:17 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:41:57.172 06:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:57.172 06:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:57.172 06:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:57.432 06:52:17 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uLoQhJuhU4 == \/\t\m\p\/\t\m\p\.\u\L\o\Q\h\J\u\h\U\4 ]] 00:41:57.432 06:52:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:41:57.432 06:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:57.432 06:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:57.432 06:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:57.432 06:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:57.432 06:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:57.693 06:52:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:57.693 06:52:17 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:41:57.693 06:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:57.693 06:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:57.693 06:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:57.693 06:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:57.693 06:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:57.954 06:52:17 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:41:57.954 06:52:17 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:57.954 06:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:57.954 [2024-11-20 06:52:18.130152] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:57.954 nvme0n1 00:41:57.954 06:52:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:41:57.954 06:52:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:57.954 06:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:58.215 06:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:58.215 06:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:58.215 06:52:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:58.215 06:52:18 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:41:58.215 06:52:18 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:41:58.215 06:52:18 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:41:58.215 06:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:58.215 06:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:58.215 06:52:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:58.215 06:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:58.476 06:52:18 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:41:58.476 06:52:18 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:58.476 Running I/O for 1 seconds... 00:41:59.859 18681.00 IOPS, 72.97 MiB/s 00:41:59.859 Latency(us) 00:41:59.859 [2024-11-20T05:52:20.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:59.859 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:59.859 nvme0n1 : 1.00 18740.24 73.20 0.00 0.00 6817.93 3549.87 13871.79 00:41:59.859 [2024-11-20T05:52:20.138Z] =================================================================================================================== 00:41:59.859 [2024-11-20T05:52:20.138Z] Total : 18740.24 73.20 0.00 0.00 6817.93 3549.87 13871.79 00:41:59.859 { 00:41:59.859 "results": [ 00:41:59.859 { 00:41:59.859 "job": "nvme0n1", 00:41:59.859 "core_mask": "0x2", 00:41:59.859 "workload": "randrw", 00:41:59.859 "percentage": 50, 00:41:59.859 "status": "finished", 00:41:59.859 "queue_depth": 128, 00:41:59.859 "io_size": 4096, 00:41:59.859 "runtime": 1.003669, 00:41:59.859 "iops": 18740.242051911537, 00:41:59.859 "mibps": 73.20407051527944, 00:41:59.859 "io_failed": 0, 00:41:59.859 "io_timeout": 0, 00:41:59.859 "avg_latency_us": 6817.933440374289, 00:41:59.859 "min_latency_us": 3549.866666666667, 00:41:59.859 "max_latency_us": 13871.786666666667 00:41:59.859 } 00:41:59.859 ], 00:41:59.859 "core_count": 1 00:41:59.859 } 00:41:59.859 06:52:19 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:59.859 06:52:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:59.859 06:52:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:41:59.859 06:52:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:59.859 06:52:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:59.859 06:52:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:59.859 06:52:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:59.859 06:52:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.859 06:52:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:59.859 06:52:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:41:59.859 06:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:59.859 06:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:59.859 06:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:59.859 06:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:59.859 06:52:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:00.120 06:52:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:00.120 06:52:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:00.120 06:52:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:00.120 06:52:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:00.120 06:52:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:00.120 06:52:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.120 06:52:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:00.120 06:52:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.120 06:52:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:00.120 06:52:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:00.381 [2024-11-20 06:52:20.425977] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:00.381 [2024-11-20 06:52:20.426738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf89740 (107): Transport endpoint is not connected 00:42:00.381 [2024-11-20 06:52:20.427734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf89740 (9): Bad file descriptor 00:42:00.381 [2024-11-20 06:52:20.428736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:00.381 [2024-11-20 06:52:20.428744] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:00.381 [2024-11-20 06:52:20.428750] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:00.381 [2024-11-20 06:52:20.428757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
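The attach above deliberately passes --psk key1, whose PSK does not match the one the target listener was set up with, so the TLS handshake collapses (the 'Bad file descriptor' and in-failed-state errors) and the RPC returns error -5, recorded in the request/response dump just below. Such expected failures run under the harness's NOT wrapper; from the es bookkeeping visible in the xtrace (es=1, (( es > 128 )), (( !es == 0 ))), a plausible reading is:

    NOT() {   # sketch inferred from the autotest_common.sh xtrace; body assumed
        local es=0
        "$@" || es=$?
        # NOT succeeds exactly when the wrapped command failed.
        (( !es == 0 ))
    }

so a negative test passes when the bad configuration is rejected, and fails the suite if it is unexpectedly accepted.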
00:42:00.381 request:
00:42:00.381 {
00:42:00.381 "name": "nvme0",
00:42:00.381 "trtype": "tcp",
00:42:00.381 "traddr": "127.0.0.1",
00:42:00.381 "adrfam": "ipv4",
00:42:00.381 "trsvcid": "4420",
00:42:00.381 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:42:00.381 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:42:00.381 "prchk_reftag": false,
00:42:00.381 "prchk_guard": false,
00:42:00.381 "hdgst": false,
00:42:00.381 "ddgst": false,
00:42:00.381 "psk": "key1",
00:42:00.381 "allow_unrecognized_csi": false,
00:42:00.381 "method": "bdev_nvme_attach_controller",
00:42:00.381 "req_id": 1
00:42:00.381 }
00:42:00.381 Got JSON-RPC error response
00:42:00.381 response:
00:42:00.381 {
00:42:00.381 "code": -5,
00:42:00.381 "message": "Input/output error"
00:42:00.381 }
00:42:00.381 06:52:20 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:42:00.381 06:52:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:42:00.381 06:52:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:42:00.381 06:52:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:42:00.381 06:52:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:42:00.381 06:52:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:42:00.381 06:52:20 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:42:00.381 06:52:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:42:00.641 06:52:20 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:42:00.641 06:52:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:42:00.641 06:52:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:42:00.900 06:52:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:42:00.900 06:52:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:42:00.900 06:52:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:42:00.900 06:52:21 keyring_file -- keyring/file.sh@78 -- # jq length
00:42:00.900 06:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:42:01.159 06:52:21 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:42:01.159 06:52:21 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.1R2CuWeweq
00:42:01.159 06:52:21 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:42:01.159 06:52:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:01.160 06:52:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:42:01.160 06:52:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:01.160 06:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.160 06:52:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:01.160 06:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.160 06:52:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:42:01.160 06:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:42:01.419 [2024-11-20 06:52:21.509544] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1R2CuWeweq': 0100660 00:42:01.419 [2024-11-20 06:52:21.509564] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:01.419 request: 00:42:01.419 { 00:42:01.419 "name": "key0", 00:42:01.419 "path": "/tmp/tmp.1R2CuWeweq", 00:42:01.419 "method": "keyring_file_add_key", 00:42:01.419 "req_id": 1 00:42:01.419 } 00:42:01.419 Got JSON-RPC error response 00:42:01.419 response: 00:42:01.419 { 00:42:01.419 "code": -1, 00:42:01.419 "message": "Operation not permitted" 00:42:01.419 } 00:42:01.419 06:52:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:01.419 06:52:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:01.419 06:52:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:01.419 06:52:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:01.419 06:52:21 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.1R2CuWeweq 00:42:01.419 06:52:21 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:42:01.419 06:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1R2CuWeweq 00:42:01.679 06:52:21 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.1R2CuWeweq 00:42:01.679 06:52:21 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:01.679 06:52:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:01.679 06:52:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:01.679 06:52:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:01.679 06:52:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:01.679 06:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:01.679 06:52:21 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:01.679 06:52:21 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:01.679 06:52:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:01.679 06:52:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:42:01.679 06:52:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:42:01.679 06:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:42:01.679 06:52:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:42:01.679 06:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:42:01.679 06:52:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:42:01.679 06:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:42:01.939 [2024-11-20 06:52:22.082998] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1R2CuWeweq': No such file or directory
00:42:01.939 [2024-11-20 06:52:22.083011] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:42:01.939 [2024-11-20 06:52:22.083025] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:42:01.939 [2024-11-20 06:52:22.083036] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:42:01.939 [2024-11-20 06:52:22.083042] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:42:01.939 [2024-11-20 06:52:22.083046] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:42:01.939 request:
00:42:01.939 {
00:42:01.939 "name": "nvme0",
00:42:01.939 "trtype": "tcp",
00:42:01.939 "traddr": "127.0.0.1",
00:42:01.939 "adrfam": "ipv4",
00:42:01.939 "trsvcid": "4420",
00:42:01.939 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:42:01.939 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:42:01.939 "prchk_reftag": false,
00:42:01.939 "prchk_guard": false,
00:42:01.939 "hdgst": false,
00:42:01.939 "ddgst": false,
00:42:01.939 "psk": "key0",
00:42:01.939 "allow_unrecognized_csi": false,
00:42:01.939 "method": "bdev_nvme_attach_controller",
00:42:01.939 "req_id": 1
00:42:01.939 }
00:42:01.939 Got JSON-RPC error response
00:42:01.939 response:
00:42:01.939 {
00:42:01.939 "code": -19,
00:42:01.939 "message": "No such device"
00:42:01.939 }
00:42:01.940 06:52:22 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:42:01.940 06:52:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:42:01.940 06:52:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:42:01.940 06:52:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:42:01.940 06:52:22 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:42:01.940 06:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:42:02.200 06:52:22 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pzQs2Sj7BK 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:02.200 06:52:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:02.200 06:52:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:02.200 06:52:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:02.200 06:52:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:02.200 06:52:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:02.200 06:52:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pzQs2Sj7BK 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pzQs2Sj7BK 00:42:02.200 06:52:22 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.pzQs2Sj7BK 00:42:02.200 06:52:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pzQs2Sj7BK 00:42:02.200 06:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pzQs2Sj7BK 00:42:02.461 06:52:22 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:02.461 06:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:02.461 nvme0n1 00:42:02.461 06:52:22 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:02.461 06:52:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:02.461 06:52:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:02.461 06:52:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:02.461 06:52:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:02.461 06:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:02.721 06:52:22 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:02.721 06:52:22 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:02.721 06:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:02.981 06:52:23 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:02.981 06:52:23 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:02.981 06:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:02.981 06:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:02.982 06:52:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:02.982 06:52:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:02.982 06:52:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:02.982 06:52:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:02.982 06:52:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:02.982 06:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:02.982 06:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:02.982 06:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:03.241 06:52:23 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:03.241 06:52:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:03.241 06:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:03.502 06:52:23 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:03.502 06:52:23 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:03.502 06:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:03.763 06:52:23 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:03.763 06:52:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pzQs2Sj7BK 00:42:03.763 06:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pzQs2Sj7BK 00:42:03.763 06:52:23 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uLoQhJuhU4 00:42:03.763 06:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uLoQhJuhU4 00:42:04.022 06:52:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:04.022 06:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:04.281 nvme0n1 00:42:04.281 06:52:24 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:04.281 06:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:04.542 06:52:24 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:04.542 "subsystems": [ 00:42:04.542 { 00:42:04.542 "subsystem": "keyring", 00:42:04.542 "config": [ 00:42:04.542 { 00:42:04.542 "method": "keyring_file_add_key", 00:42:04.542 "params": { 00:42:04.542 "name": "key0", 00:42:04.542 "path": "/tmp/tmp.pzQs2Sj7BK" 00:42:04.542 } 00:42:04.542 }, 00:42:04.542 { 00:42:04.542 "method": "keyring_file_add_key", 00:42:04.542 "params": { 00:42:04.542 "name": "key1", 00:42:04.542 "path": "/tmp/tmp.uLoQhJuhU4" 00:42:04.542 } 00:42:04.542 } 00:42:04.542 ] 00:42:04.542 
}, 00:42:04.542 { 00:42:04.542 "subsystem": "iobuf", 00:42:04.542 "config": [ 00:42:04.542 { 00:42:04.542 "method": "iobuf_set_options", 00:42:04.542 "params": { 00:42:04.542 "small_pool_count": 8192, 00:42:04.542 "large_pool_count": 1024, 00:42:04.542 "small_bufsize": 8192, 00:42:04.542 "large_bufsize": 135168, 00:42:04.542 "enable_numa": false 00:42:04.542 } 00:42:04.542 } 00:42:04.542 ] 00:42:04.542 }, 00:42:04.542 { 00:42:04.542 "subsystem": "sock", 00:42:04.542 "config": [ 00:42:04.542 { 00:42:04.542 "method": "sock_set_default_impl", 00:42:04.542 "params": { 00:42:04.542 "impl_name": "posix" 00:42:04.542 } 00:42:04.542 }, 00:42:04.542 { 00:42:04.542 "method": "sock_impl_set_options", 00:42:04.542 "params": { 00:42:04.542 "impl_name": "ssl", 00:42:04.542 "recv_buf_size": 4096, 00:42:04.542 "send_buf_size": 4096, 00:42:04.542 "enable_recv_pipe": true, 00:42:04.542 "enable_quickack": false, 00:42:04.542 "enable_placement_id": 0, 00:42:04.542 "enable_zerocopy_send_server": true, 00:42:04.542 "enable_zerocopy_send_client": false, 00:42:04.542 "zerocopy_threshold": 0, 00:42:04.542 "tls_version": 0, 00:42:04.542 "enable_ktls": false 00:42:04.542 } 00:42:04.542 }, 00:42:04.542 { 00:42:04.542 "method": "sock_impl_set_options", 00:42:04.542 "params": { 00:42:04.542 "impl_name": "posix", 00:42:04.542 "recv_buf_size": 2097152, 00:42:04.542 "send_buf_size": 2097152, 00:42:04.542 "enable_recv_pipe": true, 00:42:04.542 "enable_quickack": false, 00:42:04.542 "enable_placement_id": 0, 00:42:04.542 "enable_zerocopy_send_server": true, 00:42:04.542 "enable_zerocopy_send_client": false, 00:42:04.542 "zerocopy_threshold": 0, 00:42:04.542 "tls_version": 0, 00:42:04.542 "enable_ktls": false 00:42:04.542 } 00:42:04.542 } 00:42:04.542 ] 00:42:04.542 }, 00:42:04.542 { 00:42:04.542 "subsystem": "vmd", 00:42:04.542 "config": [] 00:42:04.542 }, 00:42:04.542 { 00:42:04.542 "subsystem": "accel", 00:42:04.542 "config": [ 00:42:04.542 { 00:42:04.543 "method": "accel_set_options", 00:42:04.543 "params": { 00:42:04.543 "small_cache_size": 128, 00:42:04.543 "large_cache_size": 16, 00:42:04.543 "task_count": 2048, 00:42:04.543 "sequence_count": 2048, 00:42:04.543 "buf_count": 2048 00:42:04.543 } 00:42:04.543 } 00:42:04.543 ] 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "subsystem": "bdev", 00:42:04.543 "config": [ 00:42:04.543 { 00:42:04.543 "method": "bdev_set_options", 00:42:04.543 "params": { 00:42:04.543 "bdev_io_pool_size": 65535, 00:42:04.543 "bdev_io_cache_size": 256, 00:42:04.543 "bdev_auto_examine": true, 00:42:04.543 "iobuf_small_cache_size": 128, 00:42:04.543 "iobuf_large_cache_size": 16 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "bdev_raid_set_options", 00:42:04.543 "params": { 00:42:04.543 "process_window_size_kb": 1024, 00:42:04.543 "process_max_bandwidth_mb_sec": 0 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "bdev_iscsi_set_options", 00:42:04.543 "params": { 00:42:04.543 "timeout_sec": 30 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "bdev_nvme_set_options", 00:42:04.543 "params": { 00:42:04.543 "action_on_timeout": "none", 00:42:04.543 "timeout_us": 0, 00:42:04.543 "timeout_admin_us": 0, 00:42:04.543 "keep_alive_timeout_ms": 10000, 00:42:04.543 "arbitration_burst": 0, 00:42:04.543 "low_priority_weight": 0, 00:42:04.543 "medium_priority_weight": 0, 00:42:04.543 "high_priority_weight": 0, 00:42:04.543 "nvme_adminq_poll_period_us": 10000, 00:42:04.543 "nvme_ioq_poll_period_us": 0, 00:42:04.543 "io_queue_requests": 512, 00:42:04.543 
"delay_cmd_submit": true, 00:42:04.543 "transport_retry_count": 4, 00:42:04.543 "bdev_retry_count": 3, 00:42:04.543 "transport_ack_timeout": 0, 00:42:04.543 "ctrlr_loss_timeout_sec": 0, 00:42:04.543 "reconnect_delay_sec": 0, 00:42:04.543 "fast_io_fail_timeout_sec": 0, 00:42:04.543 "disable_auto_failback": false, 00:42:04.543 "generate_uuids": false, 00:42:04.543 "transport_tos": 0, 00:42:04.543 "nvme_error_stat": false, 00:42:04.543 "rdma_srq_size": 0, 00:42:04.543 "io_path_stat": false, 00:42:04.543 "allow_accel_sequence": false, 00:42:04.543 "rdma_max_cq_size": 0, 00:42:04.543 "rdma_cm_event_timeout_ms": 0, 00:42:04.543 "dhchap_digests": [ 00:42:04.543 "sha256", 00:42:04.543 "sha384", 00:42:04.543 "sha512" 00:42:04.543 ], 00:42:04.543 "dhchap_dhgroups": [ 00:42:04.543 "null", 00:42:04.543 "ffdhe2048", 00:42:04.543 "ffdhe3072", 00:42:04.543 "ffdhe4096", 00:42:04.543 "ffdhe6144", 00:42:04.543 "ffdhe8192" 00:42:04.543 ] 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "bdev_nvme_attach_controller", 00:42:04.543 "params": { 00:42:04.543 "name": "nvme0", 00:42:04.543 "trtype": "TCP", 00:42:04.543 "adrfam": "IPv4", 00:42:04.543 "traddr": "127.0.0.1", 00:42:04.543 "trsvcid": "4420", 00:42:04.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:04.543 "prchk_reftag": false, 00:42:04.543 "prchk_guard": false, 00:42:04.543 "ctrlr_loss_timeout_sec": 0, 00:42:04.543 "reconnect_delay_sec": 0, 00:42:04.543 "fast_io_fail_timeout_sec": 0, 00:42:04.543 "psk": "key0", 00:42:04.543 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:04.543 "hdgst": false, 00:42:04.543 "ddgst": false, 00:42:04.543 "multipath": "multipath" 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "bdev_nvme_set_hotplug", 00:42:04.543 "params": { 00:42:04.543 "period_us": 100000, 00:42:04.543 "enable": false 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "bdev_wait_for_examine" 00:42:04.543 } 00:42:04.543 ] 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "subsystem": "nbd", 00:42:04.543 "config": [] 00:42:04.543 } 00:42:04.543 ] 00:42:04.543 }' 00:42:04.543 06:52:24 keyring_file -- keyring/file.sh@115 -- # killprocess 3171534 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3171534 ']' 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3171534 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3171534 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3171534' 00:42:04.543 killing process with pid 3171534 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@971 -- # kill 3171534 00:42:04.543 Received shutdown signal, test time was about 1.000000 seconds 00:42:04.543 00:42:04.543 Latency(us) 00:42:04.543 [2024-11-20T05:52:24.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:04.543 [2024-11-20T05:52:24.822Z] =================================================================================================================== 00:42:04.543 [2024-11-20T05:52:24.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:04.543 06:52:24 
keyring_file -- common/autotest_common.sh@976 -- # wait 3171534 00:42:04.543 06:52:24 keyring_file -- keyring/file.sh@118 -- # bperfpid=3173350 00:42:04.543 06:52:24 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3173350 /var/tmp/bperf.sock 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3173350 ']' 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:04.543 06:52:24 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:04.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:04.543 06:52:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:04.543 06:52:24 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:04.543 "subsystems": [ 00:42:04.543 { 00:42:04.543 "subsystem": "keyring", 00:42:04.543 "config": [ 00:42:04.543 { 00:42:04.543 "method": "keyring_file_add_key", 00:42:04.543 "params": { 00:42:04.543 "name": "key0", 00:42:04.543 "path": "/tmp/tmp.pzQs2Sj7BK" 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "keyring_file_add_key", 00:42:04.543 "params": { 00:42:04.543 "name": "key1", 00:42:04.543 "path": "/tmp/tmp.uLoQhJuhU4" 00:42:04.543 } 00:42:04.543 } 00:42:04.543 ] 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "subsystem": "iobuf", 00:42:04.543 "config": [ 00:42:04.543 { 00:42:04.543 "method": "iobuf_set_options", 00:42:04.543 "params": { 00:42:04.543 "small_pool_count": 8192, 00:42:04.543 "large_pool_count": 1024, 00:42:04.543 "small_bufsize": 8192, 00:42:04.543 "large_bufsize": 135168, 00:42:04.543 "enable_numa": false 00:42:04.543 } 00:42:04.543 } 00:42:04.543 ] 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "subsystem": "sock", 00:42:04.543 "config": [ 00:42:04.543 { 00:42:04.543 "method": "sock_set_default_impl", 00:42:04.543 "params": { 00:42:04.543 "impl_name": "posix" 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "sock_impl_set_options", 00:42:04.543 "params": { 00:42:04.543 "impl_name": "ssl", 00:42:04.543 "recv_buf_size": 4096, 00:42:04.543 "send_buf_size": 4096, 00:42:04.543 "enable_recv_pipe": true, 00:42:04.543 "enable_quickack": false, 00:42:04.543 "enable_placement_id": 0, 00:42:04.543 "enable_zerocopy_send_server": true, 00:42:04.543 "enable_zerocopy_send_client": false, 00:42:04.543 "zerocopy_threshold": 0, 00:42:04.543 "tls_version": 0, 00:42:04.543 "enable_ktls": false 00:42:04.543 } 00:42:04.543 }, 00:42:04.543 { 00:42:04.543 "method": "sock_impl_set_options", 00:42:04.543 "params": { 00:42:04.543 "impl_name": "posix", 00:42:04.544 "recv_buf_size": 2097152, 00:42:04.544 "send_buf_size": 2097152, 00:42:04.544 "enable_recv_pipe": true, 00:42:04.544 "enable_quickack": false, 00:42:04.544 "enable_placement_id": 0, 00:42:04.544 "enable_zerocopy_send_server": true, 00:42:04.544 "enable_zerocopy_send_client": false, 00:42:04.544 "zerocopy_threshold": 0, 00:42:04.544 "tls_version": 0, 00:42:04.544 "enable_ktls": false 00:42:04.544 } 00:42:04.544 } 00:42:04.544 ] 00:42:04.544 }, 
00:42:04.544 { 00:42:04.544 "subsystem": "vmd", 00:42:04.544 "config": [] 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "subsystem": "accel", 00:42:04.544 "config": [ 00:42:04.544 { 00:42:04.544 "method": "accel_set_options", 00:42:04.544 "params": { 00:42:04.544 "small_cache_size": 128, 00:42:04.544 "large_cache_size": 16, 00:42:04.544 "task_count": 2048, 00:42:04.544 "sequence_count": 2048, 00:42:04.544 "buf_count": 2048 00:42:04.544 } 00:42:04.544 } 00:42:04.544 ] 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "subsystem": "bdev", 00:42:04.544 "config": [ 00:42:04.544 { 00:42:04.544 "method": "bdev_set_options", 00:42:04.544 "params": { 00:42:04.544 "bdev_io_pool_size": 65535, 00:42:04.544 "bdev_io_cache_size": 256, 00:42:04.544 "bdev_auto_examine": true, 00:42:04.544 "iobuf_small_cache_size": 128, 00:42:04.544 "iobuf_large_cache_size": 16 00:42:04.544 } 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "method": "bdev_raid_set_options", 00:42:04.544 "params": { 00:42:04.544 "process_window_size_kb": 1024, 00:42:04.544 "process_max_bandwidth_mb_sec": 0 00:42:04.544 } 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "method": "bdev_iscsi_set_options", 00:42:04.544 "params": { 00:42:04.544 "timeout_sec": 30 00:42:04.544 } 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "method": "bdev_nvme_set_options", 00:42:04.544 "params": { 00:42:04.544 "action_on_timeout": "none", 00:42:04.544 "timeout_us": 0, 00:42:04.544 "timeout_admin_us": 0, 00:42:04.544 "keep_alive_timeout_ms": 10000, 00:42:04.544 "arbitration_burst": 0, 00:42:04.544 "low_priority_weight": 0, 00:42:04.544 "medium_priority_weight": 0, 00:42:04.544 "high_priority_weight": 0, 00:42:04.544 "nvme_adminq_poll_period_us": 10000, 00:42:04.544 "nvme_ioq_poll_period_us": 0, 00:42:04.544 "io_queue_requests": 512, 00:42:04.544 "delay_cmd_submit": true, 00:42:04.544 "transport_retry_count": 4, 00:42:04.544 "bdev_retry_count": 3, 00:42:04.544 "transport_ack_timeout": 0, 00:42:04.544 "ctrlr_loss_timeout_sec": 0, 00:42:04.544 "reconnect_delay_sec": 0, 00:42:04.544 "fast_io_fail_timeout_sec": 0, 00:42:04.544 "disable_auto_failback": false, 00:42:04.544 "generate_uuids": false, 00:42:04.544 "transport_tos": 0, 00:42:04.544 "nvme_error_stat": false, 00:42:04.544 "rdma_srq_size": 0, 00:42:04.544 "io_path_stat": false, 00:42:04.544 "allow_accel_sequence": false, 00:42:04.544 "rdma_max_cq_size": 0, 00:42:04.544 "rdma_cm_event_timeout_ms": 0, 00:42:04.544 "dhchap_digests": [ 00:42:04.544 "sha256", 00:42:04.544 "sha384", 00:42:04.544 "sha512" 00:42:04.544 ], 00:42:04.544 "dhchap_dhgroups": [ 00:42:04.544 "null", 00:42:04.544 "ffdhe2048", 00:42:04.544 "ffdhe3072", 00:42:04.544 "ffdhe4096", 00:42:04.544 "ffdhe6144", 00:42:04.544 "ffdhe8192" 00:42:04.544 ] 00:42:04.544 } 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "method": "bdev_nvme_attach_controller", 00:42:04.544 "params": { 00:42:04.544 "name": "nvme0", 00:42:04.544 "trtype": "TCP", 00:42:04.544 "adrfam": "IPv4", 00:42:04.544 "traddr": "127.0.0.1", 00:42:04.544 "trsvcid": "4420", 00:42:04.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:04.544 "prchk_reftag": false, 00:42:04.544 "prchk_guard": false, 00:42:04.544 "ctrlr_loss_timeout_sec": 0, 00:42:04.544 "reconnect_delay_sec": 0, 00:42:04.544 "fast_io_fail_timeout_sec": 0, 00:42:04.544 "psk": "key0", 00:42:04.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:04.544 "hdgst": false, 00:42:04.544 "ddgst": false, 00:42:04.544 "multipath": "multipath" 00:42:04.544 } 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "method": "bdev_nvme_set_hotplug", 00:42:04.544 "params": { 
00:42:04.544 "period_us": 100000, 00:42:04.544 "enable": false 00:42:04.544 } 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "method": "bdev_wait_for_examine" 00:42:04.544 } 00:42:04.544 ] 00:42:04.544 }, 00:42:04.544 { 00:42:04.544 "subsystem": "nbd", 00:42:04.544 "config": [] 00:42:04.544 } 00:42:04.544 ] 00:42:04.544 }' 00:42:04.804 [2024-11-20 06:52:24.820645] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 00:42:04.804 [2024-11-20 06:52:24.820700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173350 ] 00:42:04.804 [2024-11-20 06:52:24.902910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:04.804 [2024-11-20 06:52:24.932042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:04.804 [2024-11-20 06:52:25.075749] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:05.372 06:52:25 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:05.372 06:52:25 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:05.372 06:52:25 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:05.372 06:52:25 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:05.372 06:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:05.630 06:52:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:05.630 06:52:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:05.630 06:52:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:05.630 06:52:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:05.630 06:52:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:05.630 06:52:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:05.630 06:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:05.890 06:52:25 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:05.891 06:52:25 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:05.891 06:52:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:05.891 06:52:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:05.891 06:52:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:05.891 06:52:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:05.891 06:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:05.891 06:52:26 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:05.891 06:52:26 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:05.891 06:52:26 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:05.891 06:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:06.151 06:52:26 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:06.151 06:52:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:06.151 06:52:26 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.pzQs2Sj7BK /tmp/tmp.uLoQhJuhU4 00:42:06.151 06:52:26 keyring_file -- keyring/file.sh@20 -- # killprocess 3173350 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3173350 ']' 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3173350 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3173350 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3173350' 00:42:06.151 killing process with pid 3173350 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@971 -- # kill 3173350 00:42:06.151 Received shutdown signal, test time was about 1.000000 seconds 00:42:06.151 00:42:06.151 Latency(us) 00:42:06.151 [2024-11-20T05:52:26.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:06.151 [2024-11-20T05:52:26.430Z] =================================================================================================================== 00:42:06.151 [2024-11-20T05:52:26.430Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:06.151 06:52:26 keyring_file -- common/autotest_common.sh@976 -- # wait 3173350 00:42:06.410 06:52:26 keyring_file -- keyring/file.sh@21 -- # killprocess 3171487 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3171487 ']' 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3171487 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3171487 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3171487' 00:42:06.410 killing process with pid 3171487 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@971 -- # kill 3171487 00:42:06.410 06:52:26 keyring_file -- common/autotest_common.sh@976 -- # wait 3171487 00:42:06.670 00:42:06.670 real 0m11.970s 00:42:06.670 user 0m28.960s 00:42:06.670 sys 0m2.667s 00:42:06.670 06:52:26 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:06.670 06:52:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:06.670 ************************************ 00:42:06.670 END TEST keyring_file 00:42:06.670 ************************************ 00:42:06.670 06:52:26 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:06.670 06:52:26 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:06.670 06:52:26 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:06.670 06:52:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:06.670 06:52:26 
-- common/autotest_common.sh@10 -- # set +x 00:42:06.670 ************************************ 00:42:06.670 START TEST keyring_linux 00:42:06.670 ************************************ 00:42:06.670 06:52:26 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:06.670 Joined session keyring: 453617235 00:42:06.670 * Looking for test storage... 00:42:06.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:06.670 06:52:26 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:06.670 06:52:26 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:42:06.670 06:52:26 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:06.930 06:52:26 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:06.930 06:52:26 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:06.930 06:52:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:06.930 06:52:27 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:06.930 06:52:27 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:06.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.930 --rc genhtml_branch_coverage=1 00:42:06.930 --rc genhtml_function_coverage=1 00:42:06.930 --rc genhtml_legend=1 00:42:06.930 --rc geninfo_all_blocks=1 00:42:06.930 --rc geninfo_unexecuted_blocks=1 00:42:06.930 00:42:06.930 ' 00:42:06.930 06:52:27 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:06.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.930 --rc genhtml_branch_coverage=1 00:42:06.930 --rc genhtml_function_coverage=1 00:42:06.930 --rc genhtml_legend=1 00:42:06.930 --rc geninfo_all_blocks=1 00:42:06.930 --rc geninfo_unexecuted_blocks=1 00:42:06.930 00:42:06.930 ' 00:42:06.930 06:52:27 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:06.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.930 --rc genhtml_branch_coverage=1 00:42:06.931 --rc genhtml_function_coverage=1 00:42:06.931 --rc genhtml_legend=1 00:42:06.931 --rc geninfo_all_blocks=1 00:42:06.931 --rc geninfo_unexecuted_blocks=1 00:42:06.931 00:42:06.931 ' 00:42:06.931 06:52:27 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:06.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.931 --rc genhtml_branch_coverage=1 00:42:06.931 --rc genhtml_function_coverage=1 00:42:06.931 --rc genhtml_legend=1 00:42:06.931 --rc geninfo_all_blocks=1 00:42:06.931 --rc geninfo_unexecuted_blocks=1 00:42:06.931 00:42:06.931 ' 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:06.931 06:52:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.931 06:52:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.931 06:52:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.931 06:52:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.931 06:52:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.931 06:52:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.931 06:52:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.931 06:52:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:06.931 06:52:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
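[A minimal standalone sketch of what the prep_key calls below generate; this is not part of the captured run. The payload seen later in this log, NVMeTLSkey-1:00:...:, is consistent with base64 over the ASCII key string followed by its little-endian CRC32; that encoding is an assumption drawn from the dump itself, and python3 here stands in for the bare "python -" the test invokes.]

key=00112233445566778899aabbccddeeff   # raw key string, as set in linux.sh
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the key is treated as ASCII text, not hex bytes
crc = zlib.crc32(key).to_bytes(4, 'little')   # assumed 4-byte little-endian CRC32 trailer
print('NVMeTLSkey-1:00:' + base64.b64encode(key + crc).decode() + ':')
EOF

[If the encoding assumption holds, running this for key0 prints exactly the NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: payload that keyctl stores further down in the log.]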
00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:06.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:06.931 /tmp/:spdk-test:key0 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:06.931 
06:52:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:06.931 06:52:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:06.931 06:52:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:06.931 /tmp/:spdk-test:key1 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3173791 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3173791 00:42:06.931 06:52:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:06.931 06:52:27 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3173791 ']' 00:42:06.931 06:52:27 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:06.931 06:52:27 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:06.931 06:52:27 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:06.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:06.931 06:52:27 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:06.931 06:52:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:06.931 [2024-11-20 06:52:27.195140] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
00:42:06.932 [2024-11-20 06:52:27.195203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173791 ] 00:42:07.192 [2024-11-20 06:52:27.279952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:07.192 [2024-11-20 06:52:27.310328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:07.763 06:52:27 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:07.763 06:52:27 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:42:07.763 06:52:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:07.763 06:52:27 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:07.763 06:52:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:07.763 [2024-11-20 06:52:27.977930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:07.763 null0 00:42:07.763 [2024-11-20 06:52:28.009989] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:07.763 [2024-11-20 06:52:28.010332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:07.763 06:52:28 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.763 06:52:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:07.763 361881188 00:42:07.763 06:52:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:07.763 464934626 00:42:08.023 06:52:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3174119 00:42:08.023 06:52:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3174119 /var/tmp/bperf.sock 00:42:08.023 06:52:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:08.023 06:52:28 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3174119 ']' 00:42:08.023 06:52:28 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:08.023 06:52:28 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:08.023 06:52:28 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:08.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:08.023 06:52:28 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:08.023 06:52:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:08.023 [2024-11-20 06:52:28.088705] Starting SPDK v25.01-pre git sha1 ac2633210 / DPDK 24.03.0 initialization... 
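[A standalone sketch of the provisioning step traced above; commands are lifted from the log, and serial numbers such as 361881188 are specific to this run. The point of keyring_linux, in contrast to keyring_file, is that the PSKs live in the kernel session keyring rather than in files, and bdevperf is launched with --wait-for-rpc so the Linux keyring plugin can be enabled over RPC before framework init.]

keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
sn=$(keyctl search @s user :spdk-test:key0)   # resolves the name to its serial (361881188 in this run)
keyctl print "$sn"                            # round-trips the stored PSK, as the linux.sh@27 check does
# once bdevperf is listening on /var/tmp/bperf.sock:
scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

[With the plugin enabled, the ":spdk-test:key0" name passed to --psk at attach time is looked up in the session keyring instead of on disk.]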
00:42:08.023 [2024-11-20 06:52:28.088754] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174119 ] 00:42:08.023 [2024-11-20 06:52:28.169889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.023 [2024-11-20 06:52:28.199422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:08.994 06:52:28 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:08.994 06:52:28 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:42:08.994 06:52:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:08.994 06:52:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:08.994 06:52:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:08.994 06:52:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:09.288 06:52:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:09.288 06:52:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:09.288 [2024-11-20 06:52:29.432662] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:09.288 nvme0n1 00:42:09.288 06:52:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:09.288 06:52:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:09.288 06:52:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:09.288 06:52:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:09.288 06:52:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:09.288 06:52:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:09.598 06:52:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:09.598 06:52:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:09.598 06:52:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:09.598 06:52:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:09.598 06:52:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:09.598 06:52:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:09.598 06:52:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:09.859 06:52:29 keyring_linux -- keyring/linux.sh@25 -- # sn=361881188 00:42:09.859 06:52:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:09.859 06:52:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:09.859 06:52:29 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 361881188 == \3\6\1\8\8\1\1\8\8 ]] 00:42:09.859 06:52:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 361881188 00:42:09.859 06:52:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:09.859 06:52:29 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:09.859 Running I/O for 1 seconds... 00:42:10.799 24405.00 IOPS, 95.33 MiB/s 00:42:10.799 Latency(us) 00:42:10.799 [2024-11-20T05:52:31.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:10.799 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:10.799 nvme0n1 : 1.01 24405.29 95.33 0.00 0.00 5229.03 2225.49 6580.91 00:42:10.799 [2024-11-20T05:52:31.078Z] =================================================================================================================== 00:42:10.799 [2024-11-20T05:52:31.078Z] Total : 24405.29 95.33 0.00 0.00 5229.03 2225.49 6580.91 00:42:10.799 { 00:42:10.799 "results": [ 00:42:10.799 { 00:42:10.799 "job": "nvme0n1", 00:42:10.799 "core_mask": "0x2", 00:42:10.799 "workload": "randread", 00:42:10.799 "status": "finished", 00:42:10.799 "queue_depth": 128, 00:42:10.799 "io_size": 4096, 00:42:10.799 "runtime": 1.005233, 00:42:10.799 "iops": 24405.287132435962, 00:42:10.799 "mibps": 95.33315286107798, 00:42:10.799 "io_failed": 0, 00:42:10.799 "io_timeout": 0, 00:42:10.799 "avg_latency_us": 5229.033550999335, 00:42:10.799 "min_latency_us": 2225.4933333333333, 00:42:10.799 "max_latency_us": 6580.906666666667 00:42:10.799 } 00:42:10.799 ], 00:42:10.799 "core_count": 1 00:42:10.799 } 00:42:10.799 06:52:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:10.799 06:52:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:11.059 06:52:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:11.059 06:52:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:11.060 06:52:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:11.060 06:52:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:11.060 06:52:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:11.060 06:52:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:11.320 06:52:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:11.320 [2024-11-20 06:52:31.500661] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:11.320 [2024-11-20 06:52:31.501436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa17840 (107): Transport endpoint is not connected 00:42:11.320 [2024-11-20 06:52:31.502432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa17840 (9): Bad file descriptor 00:42:11.320 [2024-11-20 06:52:31.503434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:11.320 [2024-11-20 06:52:31.503447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:11.320 [2024-11-20 06:52:31.503453] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:11.320 [2024-11-20 06:52:31.503460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
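[The NOT/valid_exec_arg wrapping above is the negative half of the test: an attach that presents :spdk-test:key1, a second PSK the target side was evidently not configured to accept, is asserted to fail, and the connect errors just logged are the expected outcome. Standalone, the same assertion might look like the sketch below, using the identical rpc.py flags from the trace; the failure surfaces as the JSON-RPC code -5 "Input/output error" response that follows.]

if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1; then
  echo "unexpected success: the mismatched PSK must not connect" >&2
  exit 1
fi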
00:42:11.320 request: 00:42:11.320 { 00:42:11.320 "name": "nvme0", 00:42:11.320 "trtype": "tcp", 00:42:11.320 "traddr": "127.0.0.1", 00:42:11.320 "adrfam": "ipv4", 00:42:11.320 "trsvcid": "4420", 00:42:11.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:11.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:11.320 "prchk_reftag": false, 00:42:11.320 "prchk_guard": false, 00:42:11.320 "hdgst": false, 00:42:11.320 "ddgst": false, 00:42:11.320 "psk": ":spdk-test:key1", 00:42:11.320 "allow_unrecognized_csi": false, 00:42:11.320 "method": "bdev_nvme_attach_controller", 00:42:11.320 "req_id": 1 00:42:11.320 } 00:42:11.320 Got JSON-RPC error response 00:42:11.320 response: 00:42:11.320 { 00:42:11.320 "code": -5, 00:42:11.320 "message": "Input/output error" 00:42:11.320 } 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@33 -- # sn=361881188 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 361881188 00:42:11.320 1 links removed 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@33 -- # sn=464934626 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 464934626 00:42:11.320 1 links removed 00:42:11.320 06:52:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3174119 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3174119 ']' 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3174119 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:11.320 06:52:31 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3174119 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3174119' 00:42:11.580 killing process with pid 3174119 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@971 -- # kill 3174119 00:42:11.580 Received shutdown signal, test time was about 1.000000 seconds 00:42:11.580 00:42:11.580 
Latency(us) 00:42:11.580 [2024-11-20T05:52:31.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.580 [2024-11-20T05:52:31.859Z] =================================================================================================================== 00:42:11.580 [2024-11-20T05:52:31.859Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@976 -- # wait 3174119 00:42:11.580 06:52:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3173791 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3173791 ']' 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3173791 00:42:11.580 06:52:31 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:42:11.581 06:52:31 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:11.581 06:52:31 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3173791 00:42:11.581 06:52:31 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:11.581 06:52:31 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:11.581 06:52:31 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3173791' 00:42:11.581 killing process with pid 3173791 00:42:11.581 06:52:31 keyring_linux -- common/autotest_common.sh@971 -- # kill 3173791 00:42:11.581 06:52:31 keyring_linux -- common/autotest_common.sh@976 -- # wait 3173791 00:42:11.842 00:42:11.842 real 0m5.147s 00:42:11.842 user 0m9.551s 00:42:11.842 sys 0m1.442s 00:42:11.842 06:52:31 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:11.842 06:52:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:11.842 ************************************ 00:42:11.842 END TEST keyring_linux 00:42:11.842 ************************************ 00:42:11.842 06:52:31 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:11.842 06:52:31 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:11.842 06:52:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:11.842 06:52:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:11.842 06:52:31 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:11.842 06:52:31 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:11.842 06:52:31 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:11.842 06:52:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:11.842 06:52:31 -- common/autotest_common.sh@10 -- # set +x 00:42:11.842 06:52:32 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:11.842 06:52:32 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:42:11.842 06:52:32 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:42:11.842 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:42:19.977 INFO: APP EXITING 
00:42:19.977 INFO: killing all VMs 00:42:19.977 INFO: killing vhost app 00:42:19.977 WARN: no vhost pid file found 00:42:19.977 INFO: EXIT DONE 00:42:23.277 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:65:00.0 (144d a80a): Already using the nvme driver 00:42:23.277 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:42:23.277 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:42:23.278 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:42:23.278 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:42:23.278 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:42:23.278 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:42:27.484 Cleaning 00:42:27.484 Removing: /var/run/dpdk/spdk0/config 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:27.484 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:27.484 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:27.484 Removing: /var/run/dpdk/spdk1/config 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:27.484 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:27.484 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:27.484 Removing: /var/run/dpdk/spdk2/config 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:27.484 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:27.484 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:27.484 Removing: 
/var/run/dpdk/spdk3/config 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:27.484 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:27.484 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:27.484 Removing: /var/run/dpdk/spdk4/config 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:27.484 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:27.484 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:27.484 Removing: /dev/shm/bdev_svc_trace.1 00:42:27.484 Removing: /dev/shm/nvmf_trace.0 00:42:27.484 Removing: /dev/shm/spdk_tgt_trace.pid2595171 00:42:27.484 Removing: /var/run/dpdk/spdk0 00:42:27.484 Removing: /var/run/dpdk/spdk1 00:42:27.484 Removing: /var/run/dpdk/spdk2 00:42:27.484 Removing: /var/run/dpdk/spdk3 00:42:27.484 Removing: /var/run/dpdk/spdk4 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2593571 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2595171 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2596018 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2597521 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2597861 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2598920 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2599111 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2599398 00:42:27.484 Removing: /var/run/dpdk/spdk_pid2600535 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2601310 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2601673 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2602009 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2602375 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2602681 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2602987 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2603335 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2603730 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2604795 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2608355 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2608688 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2609010 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2609131 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2609546 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2609837 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2610218 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2610541 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2610764 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2610931 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2611217 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2611307 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2611805 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2612106 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2612503 00:42:27.485 Removing: 
/var/run/dpdk/spdk_pid2617197 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2622425 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2634456 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2635309 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2640521 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2640883 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2646367 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2653915 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2657008 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2669553 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2680591 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2682611 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2683641 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2705207 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2710063 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2766798 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2773188 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2780352 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2788288 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2788291 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2789297 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2790299 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2791306 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2791976 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2791981 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2792319 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2792325 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2792337 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2793363 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2794363 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2795442 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2796043 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2796168 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2796400 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2797797 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2799197 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2809519 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2844199 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2849618 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2851621 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2853802 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2853993 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2854335 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2854675 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2855399 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2857459 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2858818 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2859198 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2861916 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2862619 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2863332 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2868385 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2874983 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2874985 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2874987 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2879651 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2889879 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2895392 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2902480 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2904108 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2905752 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2907483 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2913057 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2918328 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2923357 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2932458 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2932466 00:42:27.485 Removing: 
/var/run/dpdk/spdk_pid2937649 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2937860 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2938196 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2938633 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2938740 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2944245 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2944956 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2950815 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2954162 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2960722 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2967321 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2977516 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2986332 00:42:27.485 Removing: /var/run/dpdk/spdk_pid2986340 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3009783 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3010608 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3011433 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3012121 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3013183 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3013867 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3014552 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3015262 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3020604 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3020864 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3028001 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3028379 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3034838 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3039869 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3052047 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3052720 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3057769 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3058140 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3063176 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3070171 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3073235 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3085445 00:42:27.485 Removing: /var/run/dpdk/spdk_pid3096116 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3098088 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3099183 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3119336 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3124057 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3127245 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3135005 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3135013 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3140889 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3143309 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3145596 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3147027 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3149309 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3150931 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3161343 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3162005 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3162674 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3165573 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3166012 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3166645 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3171487 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3171534 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3173350 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3173791 00:42:27.747 Removing: /var/run/dpdk/spdk_pid3174119 00:42:27.747 Clean 00:42:27.747 06:52:47 -- common/autotest_common.sh@1451 -- # return 0 00:42:27.747 06:52:47 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:42:27.747 06:52:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:27.747 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:42:27.747 06:52:47 -- 
00:42:27.747 06:52:47 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:42:27.747 06:52:47 -- common/autotest_common.sh@730 -- # xtrace_disable
00:42:27.747 06:52:47 -- common/autotest_common.sh@10 -- # set +x
00:42:27.747 06:52:48 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:28.008 06:52:48 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:42:28.008 06:52:48 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:42:28.008 06:52:48 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:42:28.008 06:52:48 -- spdk/autotest.sh@394 -- # hostname
00:42:28.008 06:52:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:42:28.008 geninfo: WARNING: invalid characters removed from testname!
00:42:54.586 06:53:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:56.498 06:53:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:58.409 06:53:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:59.801 06:53:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:01.711 06:53:21 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:03.093 06:53:23 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
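The coverage pass above is the standard lcov flow: capture counters from the instrumented tree into cov_test.info, fold them into the pre-test baseline with -a, then prune vendored and system paths with repeated -r filters. A condensed sketch of the same flow, with OUT as a stand-in for the job's output directory and the log's long --rc flag lists elided for brevity:

  OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output   # stand-in; the log writes via spdk/../output
  # 1) capture counters produced by the test run; -t tags the tracefile with the node name
  lcov -q -c --no-external -d spdk -t "$(hostname)" -o "$OUT/cov_test.info"
  # 2) merge the pre-test baseline and the test capture into one tracefile
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # 3) strip paths that would dilute the report, one -r filter per pattern
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done

Filtering in place (same file for -r input and -o output) matches what the log does; each filter rewrites cov_total.info with one more pattern removed.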
00:43:05.004 06:53:24 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:43:05.004 06:53:24 -- spdk/autorun.sh@1 -- $ timing_finish
00:43:05.004 06:53:24 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:43:05.004 06:53:24 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:05.004 06:53:24 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:43:05.004 06:53:24 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:05.004 + [[ -n 2508768 ]]
00:43:05.004 + sudo kill 2508768
00:43:05.015 [Pipeline] }
00:43:05.033 [Pipeline] // stage
00:43:05.038 [Pipeline] }
00:43:05.053 [Pipeline] // timeout
00:43:05.058 [Pipeline] }
00:43:05.072 [Pipeline] // catchError
00:43:05.077 [Pipeline] }
00:43:05.091 [Pipeline] // wrap
00:43:05.096 [Pipeline] }
00:43:05.108 [Pipeline] // catchError
00:43:05.116 [Pipeline] stage
00:43:05.118 [Pipeline] { (Epilogue)
00:43:05.130 [Pipeline] catchError
00:43:05.132 [Pipeline] {
00:43:05.143 [Pipeline] echo
00:43:05.145 Cleanup processes
00:43:05.150 [Pipeline] sh
00:43:05.443 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:05.443 3187130 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:05.459 [Pipeline] sh
00:43:05.751 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:05.751 ++ grep -v 'sudo pgrep'
00:43:05.751 ++ awk '{print $1}'
00:43:05.751 + sudo kill -9
00:43:05.751 + true
00:43:05.764 [Pipeline] sh
00:43:06.055 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:18.304 [Pipeline] sh
00:43:18.594 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:18.594 Artifacts sizes are good
00:43:18.609 [Pipeline] archiveArtifacts
00:43:18.617 Archiving artifacts
00:43:18.808 [Pipeline] sh
00:43:19.145 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:43:19.165 [Pipeline] cleanWs
00:43:19.186 [WS-CLEANUP] Deleting project workspace...
00:43:19.186 [WS-CLEANUP] Deferred wipeout is used...
00:43:19.205 [WS-CLEANUP] done
00:43:19.210 [Pipeline] }
00:43:19.222 [Pipeline] // catchError
00:43:19.231 [Pipeline] sh
00:43:19.515 + logger -p user.info -t JENKINS-CI
00:43:19.526 [Pipeline] }
00:43:19.538 [Pipeline] // stage
00:43:19.543 [Pipeline] }
00:43:19.556 [Pipeline] // node
00:43:19.609 [Pipeline] End of Pipeline
00:43:19.646 Finished: SUCCESS
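For reference, the Epilogue's "Cleanup processes" step above is a pgrep/grep/awk chain that kills anything still running out of the workspace tree; in this run the chain yielded no PIDs, so kill -9 failed and the trailing "+ true" kept the stage green. A standalone sketch of the same sweep, using the workspace path from this job:

  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # -f matches the full command line, so anything launched from $WS/spdk is listed;
  # drop the pgrep invocation itself, then keep only the PID column
  pids=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # $pids is intentionally unquoted so multiple PIDs word-split into arguments;
  # "|| true" mirrors the log: kill -9 with an empty list must not fail the stage
  sudo kill -9 $pids || true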